« Auto-éditer un wikilivre/addappendix/reconstruction et tests du script addappendix » : difference between versions

Content removed / Content added
m →Annexe créée avec addappendix : WL:RD : * various touch-ups
→script addapendix.sh : Script update

Line 37 : Line 37 :
   #P -------------------------------

-  VERSION=220225
+  VERSION=220305
   TEXTDOMAIN=addappendix
-  TEXTDOMAINDIR="/usr/share/locale"
+  #TEXTDOMAINDIR="/usr/share/locale"
+  TEXTDOMAINDIR="~/Add_appendix/share/locale"
   export TEXTDOMAINDIR

Line 219 : Line 220 :
   #O Copy the html files to respective directories
   #O Create a file with the pagename $Projectdir/$Bookname.mainPage
-  echo "créer la page du lien local vers la page principale, 'le livre'."
-  echo "create the page from the local link to the main page, 'the book'"
+  echo
+  echo $"Create the page from the local link to the main page, 'the book'"
   cat $Projectdir/$Bookname.locale.list | sed "s/ /\\ /g" | cut -d ',' -f1 > $Projectdir/$Bookname.mainPage
   echo "----------"

Line 248 : Line 249 :
   read Destination < destination
   echo "Destination = $Destination"
-  echo "To copy : 'cp -f ./$Source $Destination'"
+  echo $"Copy : 'cp -f ./$Source $Destination'"
   cp -f "./$Source" "$Destination"
   done < $Projectdir/$Bookname.list

Line 269 : Line 270 :
   #O Add the link to printable book and to articles.
   echo "$(gettext '== Contents == ')" >> $PageSclt
-  echo "<div style="font-zize:85%">" >> $PageSclt
+  echo "<div style='font-zize:85%'>" >> $PageSclt
   cat $Projectdir/$Bookname.list | tr ' ' '_' | tr '\n' '%' | sed "s/%/\n\n/g" >> $PageSclt
   echo "</div>" >> $PageSclt

Line 275 : Line 276 :
   #0 Add the link to the source of this edition.
   echo "$(gettext '=== Source for this edition === ')" >> $PageSclt
-  echo "<div style="font-zize:85%";>" >> $PageSclt
+  echo "<div style='font-zize:85%';>" >> $PageSclt
   echo -n "https://" >> $PageSclt
   cat $Projectdir/$Bookname.mainPage | sed "s/\\\ /_/g" >> $PageSclt

Line 288 : Line 289 :
   #O The ''sources'' listed for each article provide more detailled licencing
   #O information including the copyright status, the copyleft owner and the license conditions.
-  echo -n "<span style="font-zize:85%";>" >> $PageSclt
+  echo -n "<span style='font-zize:85%';>" >> $PageSclt
   echo "$(gettext 'The ''sources'' listed for each article provide more detailled licencing information including the copyright status, the copyleft owner and the license conditions..</span> ')" >> $PageSclt
   #O or, validate one or the other of these texts :
   # echo $"The texts are available with their respective licenses, however other terms may apply.<br />See the terms of use for more details : <br />https://wikimediafoundation.org/wiki/Conditions_d'utilisation.</span>" >> $PageSclt
   echo " " >> $PageSclt
-  echo "<div style="font-zize:72%";>" >> $PageSclt
+  echo "<div style='font-zize:72%';>" >> $PageSclt
   echo "----------"

Line 359 : Line 360 :

   #O Author(s).
-  echo -n "$(gettext ', ''author(s) : '' ')" > $Projectdir/$line/$line.author
+  echo -n "$(gettext ', ''author : '' ')" > $Projectdir/$line/$line.author
   cat $Projectdir/$line/$line.str | grep -n -m 1 -i -e wgRelevantUserName | sed "s/\"/%/g" | cut -d'%' -f4 > tmp
   if test -s tmp
   then cat tmp >> $Projectdir/$line/$line.author; rm tmp
   else
-  echo "Author(s) not found ! " >> $Projectdir/$line/$line.author
+  echo $"Author not found ! " >> $Projectdir/$line/$line.author
   if wget --spider https://xtools.wmflabs.org/articleinfo/en.wikibooks.org/$line 2>/dev/null
   then

Line 408 : Line 409 :
   #O Show Headscli filename to console
   echo
-  echo $"$Headscli english version"; echo
+  echo -n"$Headscli"; echo $" english version"; echo
   echo $"== Images sources licenses and contributors ==" > $Headscli
-  echo $"<span style=\"font-size:85%\">The ''sources'' listed for each illustration provide more detailed licensing information, including copyright status, the holders of these rights and the license conditions.</span>" >> $Headscli
+  echo -n $"<span style='font-size:85%'>"; echo $"The ''sources'' listed for each illustration provide more detailed licensing information, including copyright status, the holders of these rights and the license conditions.</span>" >> $Headscli
   echo " " >> $Headscli
-  echo "<div style=\"font-size:72%\">" >> $Headscli
+  echo "<div style='font-size:72%'>" >> $Headscli
   echo >> $Headscli
   #T Show the content of file Headscli cat $Headscli; exit 0

Line 485 : Line 486 :
   cat $line.license
-  echo -n ", ''$(gettext ' author(s) : ')''" > $line.authors
+  echo -n ", ''$(gettext ' author : ')''" > $line.authors
   rm tmp
   cat $line.str | grep -i -n -m1 -A 1 -e Author | grep -i -e user -e utilisteur -e auteur | tr '/' '\n' | grep -i -e user -e utilisteur -e auteur | cut -d '"' -f1 > tmp

Line 575 : Line 576 :
   #T echo "*** commonshtml.list ***"; cat commonshtml.listexit 0
   #O Copy article name in file $Bookname.sclipco
-  echo "'''Article : $pjline'''<br \>" >> $Pagesclipco
+  echo "'''Article : $pjline'''<br />" >> $Pagesclipco
   echo "'''Article : $pjline'''"

Line 621 : Line 622 :
   #O authors.
   rm -rf tmp
-  #Pwww echo -n ", ''$(gettext ' authors : ')'' " > $htmlline.co.authors
+  #Pwww echo -n ", ''$(gettext ' author : ')'' " > $htmlline.co.authors
-  #echo -n $", ''authors :''" > $htmlline.co.authors
+  #echo -n $", ''author : ''" > $htmlline.co.authors
-  echo -n ", ''authors :''" > $htmlline.co.authors
+  echo -n ", ''author : ''" > $htmlline.co.authors
   #Test cat tmp; echo "$htmlline.co.authors"; exit -1
   cat $htmlline.co.str | grep -i -n -m1 -A 1 -e Author -e Auteur | tr '/' '\n' | grep -i -e user -e utilisteur -e auteur -e author | cut -d '"' -f1 | grep -i -e user -e utilisteur -e auteur -e author > tmp

Version of 5 March 2022, at 11:49

Warning: editing in progress!

A contributor is currently reworking this page in depth. Please avoid editing it, to limit the risk of edit conflicts, until this notice disappears. Thank you.

  • This page concerns the packaged addappendix software

preinstall-usr-local.bash

#!/bin/bash
#H Header doc
#H -------------------------------
#H File : tests/preinstall-usr-local.bash
#H Syntax : ./preinstall-usr-local.bash [ ? | -v ]
#H Created : 220118 by <wikibooks user>
#H Updated : 220118 by ... for
#O Organizational chart
#O -------------------------------
#P Programmers notes
#P -------------------------------
VERSION=220118

#O Script begin here
sudo install -d /usr/local/datas
sudo cp /home/cardabela/addappendix-211219/datas/*.dat /usr/local/datas/.
ls /usr/local/datas
#O Script end
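
A minimal usage sketch (paths assumed from the listings and from the test transcript below): run the pre-install once so that addappendix.sh finds its .dat files in /usr/local/datas, then launch the test script.

cd ~/addappendix-211219/tests
./preinstall-usr-local.bash        # copies datas/*.dat to /usr/local/datas
cd addapendix
./tests_addapendix.bash            # runs the four test cases against addappendix.sh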

script addapendix.sh

#!/bin/bash
#H Header doc
#H -------------------------------
#H File : ~/Add_appendix/tests/13-(pkg)-addappendix.sh/addapendix.sh
#H Syntax : addapendix [ ?  | --v ]
#H Created : 220113 by GC
#H Updated : 220221 by GC for page ScliC
#O Organizational chart
#O -------------------------------
#P Programmers notes
#P -------------------------------

VERSION=220305
TEXTDOMAIN=addappendix
#TEXTDOMAINDIR="/usr/share/locale"
TEXTDOMAINDIR="~/Add_appendix/share/locale"
export TEXTDOMAINDIR

#P . gettext for translation
. gettext.sh
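#P Example (sketch): with TEXTDOMAIN=addappendix, a message written as
#P   echo $"Book name : $Bookname"
#P is looked up in $TEXTDOMAINDIR/<locale>/LC_MESSAGES/addappendix.mo;
#P as long as no catalog is installed, the English text is printed unchanged.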

#O Script begin here
#O If parameters is empty
    if test -z $1
#O Then print the short syntax and exit -1
    then 
	  echo -n -e "\033[31m"
      echo -n $"No parameter. addappendix [ <full url of book> | ? | --v ]"
	  echo -e "\033[0m"
	  exit -1
	fi

#O If first parameter is '?'
	if [ "$1" = "?" ]
#O Then print syntax with examples and exit 0
    then 
	  echo -n -e "\033[32m"
	  echo $"Syntax: addappendix [ <full url of book> | --v ]"
	  echo $"  Example 1 : addappendix https://en.wikibooks.org/wiki/Wikibooks:Collections/Guide_to_Unix"
	  echo $"  Example 2 : addappendix https://fr.wikibooks.org/wiki/Wikilivres:Compilations/Faire_sa_fleur_de_sel"	  
	  echo -e "\033[0m"
      exit 0
	fi
#O IF first parameter is "--v"
    if [ "$1" = "--v" ]
#O Then print addapendix version
	then
	  echo -n -e "\033[32m"
	  echo -n $"addapendix version : $VERSION"
	  echo -e "\033[0m"
	  exit 0
	fi
	
#O *** First parameter analysis ***
#T    echo "$1"
#O Test if the first parameter points to wikibooks.org/wiki
    if echo $1 | grep wikibooks.org/wiki 
	then 
	  echo -n -e "\033[32m"	
	  echo -n $"  is a wiki-book"
	  echo -e "\033[0m"
    else
	  echo -n -e "\033[31m"	
	  echo -n $"$1 is not a wiki-book on wikibooks.org"
	  echo -e "\033[0m"
      exit -1
    fi
#O Check if $1 file exists
    if wget --spider $1 2>/dev/null; then
	  echo -n -e "\033[32m"		
      echo -n $"File $1 is found"
	  echo -e "\033[0m"	  
    else
	  echo -n -e "\033[31m"		
      echo -n $"File $1 is not found"
	  echo -e "\033[0m"
      exit -1
    fi

#O Find the bookname
    echo $1 | awk -F"/" ' { print $NF }' > bookname.txt
	read Bookname<bookname.txt
	echo; echo -n -e "\033[1;32m"		
	echo -n $"Book name : $Bookname"
	echo -e "\033[0m"
    
    echo $1 | awk -F"/" ' { print $3 }' > site.txt
    read Site<site.txt
	echo; echo -n -e "\033[1;32m"		
	echo -n $"Site name : $Site"
	echo -e "\033[0m"
    
    echo $1 | awk -F"/" ' { print $5 }' > compilations.txt
	read Compilations<compilations.txt
	echo -n -e "\033[1;32m"		
	echo -n $"Compilations name: $Compilations"
	echo -e "\033[0m"
	if [ "$Compilations" = "Wikilivres:Compilations" ]; then Suffix=compiled; fi
	if [ "$Compilations" = "Wikibooks:Collections" ]; then Suffix=compiled; fi
    if test -z $Suffix; then Suffix=compiled; fi
    echo -n -e "\033[1;32m"		
	echo "Suffix = $Suffix"
	echo -e "\033[0m"

#O Create Bookname directory
    install -d ~/Add_appendix/books/$Bookname
	Projectdir=~/Add_appendix/books/$Bookname
#O Create temp directory in Wordir
	Workdir=~/Add_appendix
	mkdir -p $Workdir/temp
#O ============================================================================
##O Create the file bookname.suffix
#T ***********************
#O Create $Projectdir/resources/TMP to download
    mkdir -p $Projectdir/resources/TMP
#O Download $1
    cd $Projectdir/resources/TMP
    rm -Rf $Projectdir/resources/TMP/* 2> /dev/null
    wget -N $1 -o $Workdir/temp/wget-log-télécharger.txt
    ls -1 > ../filename.txt
    read Filename<../filename.txt
    if [ "$Filename" = "filename.txt" ]; then echo $"line 113: \$Filename = filename.txt error, exit -1"; exit -1; fi
    rm ../filename.txt
#O go up in the directory resources and rename 'TMP' '$Filename'
    cd ..
    if test -e $Filename; then rm -R $Filename; fi
    if test -d $Filename 2>/dev/null
	then rm -R $Filename 2>/dev/null
    fi 
    mv TMP $Filename
    cd $Filename
#T  ls -al 
    cat $Filename|grep "<li><a href=">extract-li
    cat extract-li | sed "s/title=\"/\n[[/g" | grep -v "<li><a href=" |sed "s/\">/]]\n/g"|grep -v "</a>\|Cat\|<div" >extract-li1
    cat $Filename|grep "<dd><a href=">extract-dd
    cat extract-dd | sed "s/title=\"/\n[[/g" | grep -v "<dd><a href=" |sed "s/\">/]]\n/g"|grep -v "</a>" >extract-dd1
    cat extract-dd1 > $Bookname.$Suffix
    cat extract-li1 >> $Bookname.$Suffix
#T    echo "$Bookname.$Suffix = "
    cp $Bookname.$Suffix $Projectdir/$Bookname.$Suffix
#T ***********************
    if test -e $Projectdir/$Bookname.$Suffix
    then
	  echo -n -e "\033[1;32m"		
      echo -n "$Bookname.$Suffix : "
	  echo -e "\033[0m"
      cat $Projectdir/$Bookname.$Suffix
    fi
#O ============================================================================	
#O Download the book in html form
#O Download the site recursively with infinite depth ( -linf ), \
#O convert the links for local browsing ( -k ), \
#O fetch all the files needed to display an HTML page properly ( -p ) \
#O and rename all HTML pages with the .html extension ( -E )
    echo; echo "Dowload $1"
    wget -r -linf -k -p -E "$1" -o $Workdir/temp/wget-log-télécharger.txt

# Create lists
   if test -e /usr/local/datas/content_cleaner.dat; then Datasdir=/usr/local/datas
   else
     echo -n -e "\033[12;31m"
     echo -n $"content_cleaner.dat not found in /usr/local/datas"
     echo -e "\33[0m"
     exit -1
   fi

   if test -e "$Projectdir/$Bookname.compiled"
   then 
   {
     echo "$(gettext ' Found Compiled page : ')$Projectdir/$Bookname.compiled"; echo
     echo " Create $Projectdir/$Bookname.list with :"; echo "    $Projectdir/$Bookname.compiled"; echo
     cat "$Projectdir/$Bookname.compiled" | sed -f $Datasdir/content_cleaner.dat > $Projectdir/$Bookname.compiled.cleaned
     cat "$Projectdir/$Bookname.compiled.cleaned" | grep -v '=' | sed "s/\[\[/https:\/\/$Site\/wiki\//g" | sed "s/\]\]//g" | grep "wiki" | tr ' ' '_' | cut -d '|' -f1 > $Projectdir/$Bookname.list
     cat "$Projectdir/$Bookname.compiled.cleaned" | grep -v '=' | sed "s/\[\[//g" | sed "s/\]\]//g" | cut -d '|' -f1 > $Projectdir/$Bookname.prj
   }
   fi
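#P Example (sketch, from the test run below): the .compiled entry
#P   [[Faire fleurir le sel/Couverture]]
#P becomes, in $Bookname.list,
#P   https://fr.wikibooks.org/wiki/Faire_fleurir_le_sel/Couverture
#P and, in $Bookname.prj, the plain title Faire fleurir le sel/Couverture.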
#T Print $Projectdir/$Bookname.prj
    cat $Projectdir/$Bookname.prj

#O Download the complete book structure in project directory
    cd $Projectdir
    echo $"download all sub-directories of the book '$Bookname'"
    wget -r -linf -k -p -E  -i $Bookname.list -o $Workdir/temp/wget-log-download.txt
    echo "----------"
#T Testspoint    exit 0
#O Move the html pages into working sub-directories to document the pages and sub pages
#O   create a local list to the downloaded directories $Projectdir/$1.locale.list
      echo "create the complete concatenated hierarchy of the directories of the book '$Bookname'"
      cat $Projectdir/$Bookname.list | sed "s/https:\/\///g" | sed "s/\ /\\\ /g" | tr '\n' ',' > $Projectdir/$Bookname.locale.list
      echo "   Concatenated local list $Projectdir/$Bookname.locale.list :"
      echo ""
      cat $Projectdir/$Bookname.locale.list
      echo "----------"  
#O Copy the html files to respective directories
#O   Create a file with the pagename $Projectdir/$Bookname.mainPage
      echo
      echo $"Create the page from the local link to the main page, 'the book'"
      cat $Projectdir/$Bookname.locale.list | sed "s/ /\\ /g" | cut -d ',' -f1 > $Projectdir/$Bookname.mainPage
      echo "----------"
#O   Initialize the variable $mainPage
      read mainPage < $Projectdir/$Bookname.mainPage
      echo "variable mainPage = $mainPage"     
    #T cat $Projet/$1.mainPage | awk -F"/" '{print NF}' > nbchamps
    #T read NbChamps < nbchamps
    #T echo "Variable NbChamps = $NbChamps"
      echo "----------"
#O   Create a file of the working directories to be created.
      ls "$mainPage" | sed "s/.html//g"  > $Projectdir/$Bookname.dirs
      echo "sub-working-diectories : "
      cat  $Projectdir/$Bookname.dirs
      echo "----------" 
#O   Copy the html pages and subpages in the respective directories
      while read line
      do
        echo "$line".html | sed "s/https:\/\///g" | tr '\n' ' ' > source
        read Source < source
        echo "Source = $Source"

        echo "$line" | awk -F"/" '{ print $NF }'| tr '\n' '/' > destination 
        read dir < destination
        mkdir $dir
        echo "$line".html | awk -F"/" '{ print $NF }' >> destination
        read Destination < destination
        echo "Destination = $Destination"
        echo $"Copy : 'cp -f ./$Source $Destination'"
        cp -f "./$Source" "$Destination"
      done < $Projectdir/$Bookname.list
      rm source ; rm destination

#O ============================================================================
#O Create variable PageSclt
    PageSclt=$Projectdir/$Bookname.sclt
#O File creation '$Bookname.sclt' and print the contents.
    echo "----------"
    echo "$(gettext '= Appendix = ')" > $PageSclt
    echo >> $PageSclt

#O Add <references />
    echo "$(gettext '== References == ')" >> $PageSclt
    echo "$(gettext '<references /> ')" >> $PageSclt
    echo  >> $PageSclt
    echo "<div style='page-break-before:always'></div>" >> $PageSclt
    
#O Add the link to printable book and to articles.
    echo "$(gettext '== Contents == ')" >> $PageSclt
    echo "<div style='font-zize:85%'>" >> $PageSclt
    cat $Projectdir/$Bookname.list | tr ' ' '_' | tr '\n' '%' | sed "s/%/\n\n/g" >> $PageSclt
    echo "</div>" >> $PageSclt

#O Add the link to the source of this edition.
    echo "$(gettext '=== Source for this edition === ')" >> $PageSclt
    echo "<div style='font-size:85%'>" >> $PageSclt
    echo -n "https://" >> $PageSclt
    cat $Projectdir/$Bookname.mainPage | sed "s/\\\ /_/g" >> $PageSclt
#P other version : cat $Projectdir/Bookname".list" | tr ' ' '_' | tr '\n' '%' | sed "s/%/%\n/g" | grep $1% | tr -d % >> $PageSclt
    echo "</div>" >> $PageSclt
    echo " " >> $PageSclt
    echo "<div style='page-break-before:always'></div>" >> $PageSclt

#O Create section 'Article', 'Source', 'License', 'Contributors(?)'
    echo "$(gettext '== Articles Sources, and contributors == ')" >> $PageSclt
#O   add the text : style PediaPress or personalized.
#O   The ''sources'' listed for each article provide more detailed licensing
#O   information including the copyright status, the copyleft owner and the license conditions.
    echo -n "<span style='font-size:85%'>" >> $PageSclt
    echo "$(gettext 'The ''sources'' listed for each article provide more detailed licensing information including the copyright status, the copyleft owner and the license conditions.</span> ')" >> $PageSclt
#O or, validate one or the other of these texts : 
#   echo $"The texts are available with their respective licenses, however other terms may apply.<br />See the terms of use for more details : <br />https://wikimediafoundation.org/wiki/Conditions_d'utilisation.</span>" >> $PageSclt
    echo " " >> $PageSclt
    echo "<div style='font-zize:72%';>" >> $PageSclt
    
echo "----------"
#O Create or recreate the list-file $Projectdir/$1.pj
    cat $Projectdir/$Bookname.list | awk -F"/" '{ print $NF }' > $Projectdir/$Bookname.pj
    Pjlist=$Projectdir/$Bookname.pj
    echo "Pjlist : "$PjList

#O While exist line in file $PjList ,
    while read line
    do    
#O    Print the line read,
       echo
       echo "$(gettext '   line read = ')"$line
       echo
#O    Extract and copy all strings from the html file
#O      $line.html in the file $line.str and add to screen
#T pwd
       mkd -pws '**' "$line/$line.html" $Projectdir/$line/$line.tmp | tr ',' '\n' > $Projectdir/$line/$line.str
#T break
#O    Create the documentation file of pages :
       echo "*** References : articles, src, lic, contrib. "
    
#O    Print article,
       if [ $line != $Bookname ]
       then  
         echo "'''$line'''" >> $PageSclt
       fi
       echo "'''"$line"'''" > $Projectdir/$line/$line.article
       cat $Projectdir/$line/$line.article
       
#O    Print source,
       echo -n "$(gettext ', ''source :'' ')https://"$Site"/w/index.php?oldid=" > $Projectdir/$line/$line.RevisionId
       cat $Projectdir/$line/$line.str | grep -n -m 1 -i -e wgRevisionId | tr -d ':' | sed "s/\"/%/g" | cut -d'%' -f3 >> $Projectdir/$line/$line.RevisionId
       if [ "$line" != "$Bookname" ]
       then          
         cat $Projectdir/$line/$line.RevisionId  >> $PageSclt
       fi
       cat $Projectdir/$line/$line.RevisionId
       

#P    footer license :
#P    <li id="footer-info-copyright">Les textes sont disponibles sous <a href="https://creativecommons.org/licenses/by-sa/3.0/">license Creative Commons attribution partage à l’identique</a> ; d’autres termes peuvent s’appliquer.<br/>
#P      Voyez les <a href="https://wikimediafoundation.org/wiki/Conditions_d'utilisation">termes d’utilisation</a> pour plus de détails.<br/></li>
#P
#P    Print license :
#P    <link rel="license" href="https://creativecommons.org/licenses/by-sa/3.0/"/>


#T echo ", ''Copyright :''"  >> ArticleUn.tmp
#T cat ArticleUn.str | grep -n -m 1 -i -e license | sed "s/\"\//%\//g" | cut -d'%' -f2 |sed "s/\/\//https:\/\//g"  >> ArticleUn.tmp
#O    Print license :
       echo -n "$(gettext ', ''license :'' ')" > $Projectdir/$line/$line.license
    #T cat $Projectdir/$line/$line.str | grep -n -m 1 -i -e license | sed "s/\"\//%\//g" | cut -d'%' -f4 >> $Projectdir/$line/$line.license
       cat $Projectdir/$line/$line.str | grep -n -m 1 -i -e license | sed "s/\"\//%\//g" | tr '"' '%' | cut -d'%' -f4 >> $Projectdir/$line/$line.license
    #T cat $Projectdir/$line/$line.str | grep -n -m 1 -i -e license | sed "s/\"\//%\//g" | cut -d'%' -f2 | sed "s/\/\//https:\/\//g" >> $Projectdir/$line/$line.license
       if [ $line != $Bookname ]
       then  
         cat $Projectdir/$line/$line.license >> $PageSclt
       fi
       cat $Projectdir/$line/$line.license
       #
       #P special case for the fr footer ##
       cat $Projectdir/$line/$line.str | grep -n -m 1 -i -e footer-info-copyright | sed "s/\"\//%\//g" | tr '"' '%' | cut -d'%' -f4  > $Projectdir/$line/$line.license

#O    Author(s).  
       echo -n "$(gettext ', ''author : '' ')" > $Projectdir/$line/$line.author       
       cat $Projectdir/$line/$line.str | grep -n -m 1 -i -e wgRelevantUserName | sed "s/\"/%/g" | cut -d'%' -f4 > tmp
       if test -s tmp 
         then cat tmp >> $Projectdir/$line/$line.author; rm tmp
         else 
           echo $"Author not found ! " >> $Projectdir/$line/$line.author
           if wget --spider https://xtools.wmflabs.org/articleinfo/en.wikibooks.org/$line 2>/dev/null
           then
             echo "$(gettext '. -see :') https://xtools.wmflabs.org/articleinfo/$Sitename/$line" >> $Projectdir/$line/$line.author
           elif wget --spider https://xtools.wmflabs.org/articleinfo/$Sitename/$Bookname/$line 2>/dev/null
           then 
             echo "$(gettext '. -see :') https://xtools.wmflabs.org/articleinfo/$Sitename/$Bookname/$line" >> $Projectdir/$line/$line.author
           else 
             cat $Projectdir/$line/$line.str | grep -n -m 1 -i -e wgRelevantPageName | sed "s/\"/%/g" | cut -d'%' -f4 > tmp
             if test -s tmp
             then
               #T echo "&action=history" >> tmp
               echo -n "$(gettext '. -see ''contributors'' in book, history page of ')"  >> $Projectdir/$line/$line.author
               cat tmp >> $Projectdir/$line/$line.author; rm tmp
             fi
           fi
       fi
       #https://xtools.wmflabs.org/articleinfo/en.wikibooks.org/Guide_to_Unix/Introduction
       
       if [ $line != $Bookname ]
       then
         cat $Projectdir/$line/$line.author >> $PageSclt
         cat $Projectdir/$line/$line.author
       fi
       
       echo >> $PageSclt
       
#O end of while.
    done < $Pjlist
    
#O Add end of div and page break
    echo "</div>" >> $PageSclt
    echo "<div style='page-break-before:always'></div>" >> $PageSclt
    
#P
#P Creation of the page Bookname.scli (images sources, contributors, licenses)
#P

#O Initialize the header variable of the scli files.
    Headscli=$Projectdir/$Bookname.scli
    echo > $Headscli
#O Show Headscli filename to console
    echo
    echo -n"$Headscli"; echo $" english version"; echo
    echo $"== Images sources licenses and contributors ==" > $Headscli
    echo -n $"<span style='font-size:85%'>"; echo $"The ''sources'' listed for each illustration provide more detailed licensing information, including copyright status, the holders of these rights and the license conditions.</span>" >>  $Headscli
    echo " " >>  $Headscli
    echo "<div style='font-size:72%'>" >> $Headscli
    echo >>  $Headscli
#T Show the content of file Headscli    cat $Headscli; exit 0

#O ============================================================================
#O If the file $Projectdir/$Bookname/$Bookname.str exists, create the page $PageSclic containing the images in a classic order
    if test -e $Projectdir/$Bookname/$Bookname.str
    then
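#P Note (assumption): $PageSclic is written to below (cat $line.title >> $PageSclic, etc.)
#P but is not initialized in this listing; by analogy with $PageSclt and $Pagesclipco,
#P a plausible initialization would be:
#P   PageSclic=$Projectdir/$Bookname.sclic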
#O Select lines containing 'fichier:', 'file:', 'image:' and create bookname.files
    cat $Projectdir/$Bookname/$Bookname.str | grep -n -i -e fichier: -e file: -e image: > $Projectdir/$Bookname/$Bookname.files
#O Select lines containing 'fichier:', '.jpg', '.png', '.gif' and create bookname.pict
    cat $Projectdir/$Bookname/$Bookname.str | grep -n -i -e fichier: -e .jpg -e .png -e .gif > $Projectdir/$Bookname/$Bookname.picts
#O Select, in bookname.files, the lines containing 'title', remove the tag </div>, cut on ">" and select the last field to create bookname.illustrations
    cat $Projectdir/$Bookname/$Bookname.files | grep title |sed "s/<\/div>//g" | awk -F">" '{print $NF}' > $Projectdir/$Bookname/$Bookname.illustrations
#O In the .files file, with the separator "=", print each field on its own line, select the lines containing 'https', replace the character '"' with '!'
#O   and select the third field, then select again the line containing https, replace the character '>' with '!', remove '</a', then remove the character '!'
#O   and create the .links file
       cat $Projectdir/$Bookname/$Bookname.files | awk -F"=" '{for (i=1;i<=NF;i++) print $i "\n"}' | grep https | sed "s/\"/!/g" | cut -d '!' -f3 \
       | grep https | tr '>' ! | sed "s/<\/a//g" |sed "s/!//g" > $Projectdir/$Bookname/$Bookname.links
#OF Télécharger les fichiers contenus dans la liste du fichier bookname.links
#O Download the files contained in the list of the bookname.links file
    wget -P $Projectdir/$Bookname -r -linf -k -p -E -i $Projectdir/$Bookname/$Bookname.links       
#O Copy html files from ./commons.wikimedia.org/wiki in the current directory
    cd $Bookname
    if test -e commons.wikimedia.org; then cp -R commons.wikimedia.org/wiki/*.html . ; fi
#O html.list initialization
    echo -n "" > html.list
    if test -s $Projectdir/$Bookname/$Bookname.links
    then 
    { 
      echo $"$Projectdir/$Bookname/$Bookname.links is not empty" 
#OF  Tant qu'on lit des lignes dans le fichier .links, lire les images et les lister dans html.list  
#O   As long as there is a line in file html.links, read the line and copy it to html.list
      while read line
      do
      echo $line | awk -F"/" '{print $NF}' | cut -d '%' -f1 | cut -d '.' -f1 > tmp
      read Image < tmp
      ls $Image*.html  >> html.list
      echo "Image : "$Image.html  
      done < $Projectdir/$Bookname/$Bookname.links
    }
    elif test -s $Projectdir/html.list; then cp $Projectdir/html.list $Projectdir/$Bookname/html.list
    else echo $"No images found in $Projectdir/$Bookname"; exit 0
    fi
    
    echo " *** References : image, src, lic, contrib."
#O As long as there is a line in file html.list extract illustrations, sources, licenses, author(s)
    while read line
    do
       echo
       echo
       echo "$(gettext '**** line = ')$line ****"
       echo
	   
       mkd -pw '**' $line $line.tmp
       cat $line.tmp | tr ',' '\n' > $line.str 
     
       echo -n "'''$(gettext 'Illustration : ')'''" > $line.title
       cat $line.str |grep wgTitle | cut -d '"' -f4 >> $line.title
       cat $line.title >> $PageSclic 
       cat $line.title

       echo -n ", ''$(gettext ' source : ')''https://"$Site"/w/index.php?title= " > $line.source
       echo $line | sed "s/.html//g" >> $line.source
       cat $line.source >> $PageSclic
       cat $line.source

       echo -n ", ''$(gettext ' license : ')''" > $line.license
       cat $line.str | grep licensetpl_short | sed "s/<td>//g" | sed "s/<span class//g" | sed "s/<\/span>//g" | sed "s/style=\"display:none;\"//g" | tr '=' '\n' | grep licensetpl_short | awk -F">" '{print $NF}' >> $line.license
       cat $line.license >> $PageSclic
       cat $line.license
	   
       echo -n ", ''$(gettext ' author : ')''" > $line.authors
       rm tmp
       cat $line.str | grep -i -n -m1 -A 1 -e Author | grep -i -e user -e utilisateur -e auteur | tr '/' '\n' | grep -i -e user -e utilisateur -e auteur | cut -d '"' -f1 > tmp
       if test -s tmp 
       then cat tmp >> $line.authors
       else echo "-" >> $line.authors
       fi
       cat $line.authors >> $PageSclic
       cat $line.authors	   
       echo >> $PageSclic
    done < html.list
#P bottom of the page before the new page
    echo "</div>" >> $PageSclic
    
#T    echo "$(gettext '{{Newpage}} ')" >> $PageSclic
    echo "<div style='page-break-before:always'></div>" >> $PageSclic
#O end of test -e $Projectdir/$Bookname/$Bookname.str  
    else 
      echo  -e "\033[31m"
      echo $"Can not create $Projectdir/$Bookname/$Bookname.sclic. URL page of book is not found"
	  echo -e "\033[0m"
#O end of create PageSclic
    fi
    
#O ============================================================================
#O Create variable Pagesclipco
    Pagesclipco="$Projectdir/$Bookname.sclipco"
    echo $Pagesclipco
#O Wikibooks sclipco personalized page initialization with the title 'Images sources, etc.
    cat $Projectdir/$Bookname.scli > $Pagesclipco
#Test 
cat $Pagesclipco
#O ============================================================================
 
#O Create an identification loop of the directories corresponding to the articles
#O As long as we can read the lines of the file $Projectdir/$Bookname.pj
    while read pjline
    do
#O   If the line read is not $Bookname (name of the book)
#T    echo "line read : " $line
      if [ $pjline != $Bookname ]
#O     Then:
        then
#O     Enter in the article directory,
        cd $Projectdir/$pjline
#O     Create image documentation files
#O     Open the stream of $Projectdir/$pjline/$pjline.str of the article and select
#O       the character strings containing: File:, Image: and put them in the files
#O       $Projectdir/$pjline/$pjline.files, .picts, .illustrations, .images, .links
        cat $Projectdir/$pjline/$pjline.str | grep -n -i -e Fichier: -e file: -e image: | sed -f $RepCom/$Conversions > $Projectdir/$pjline/$pjline.files
        cat $Projectdir/$pjline/$pjline.str | grep -n -i -e fichier: -e .jpg -e .png -e .gif | sed -f $RepCom/$Conversions> $Projectdir/$pjline/$pjline.picts
        cat $Projectdir/$pjline/$pjline.files | grep title |sed "s/<\/div>//g" | awk -F">" '{print $NF}' > $Projectdir/$pjline/$pjline.illustrations
#T        cat $Projectdir/$pjline/$pjline.files | awk -F"=" '{for (i=1;i<=NF;i++) print $i "\n"}' | grep https | sed "s/\"/!/g" | cut -d '!' -f3 | grep https | tr '>' ! | sed "s/<\/a//g" |sed "s/!//g" > $Projectdir/$pjline/$pjline.links
        cat $Projectdir/$pjline/$pjline.files | awk -F"=" '{for (i=1;i<=NF;i++) print $i "\n"}' | grep https://$Site | sed "s/\"/!/g" | cut -d '!' -f2 > $Projectdir/$pjline/$pjline.images
#Tbreak
#O Transform the links of the image file on wikibooks into an image file on commons
    cat $Projectdir/$pjline/$pjline.images | sed "s/$Site/commons.wikimedia.org/g"| sed "s/Fichier/File/g" > $Projectdir/$pjline/$pjline.commonsimages
#O     Download the image files from the wikimedia server.
#P     Note: the -N option avoids re-downloading a file that is already up to date,
#P      and avoids adding a numbered suffix to the file name.
#T      #T wget -N -P $Projectdir/$pjline -i $Projectdir/$pjline/$pjline.images
        wget -P $Projectdir/$pjline -r -linf -k -p -E  -i $Projectdir/$pjline/$pjline.commonsimages
#T     echo "*** Commonsimages ***"; cat $Projectdir/$pjline/$pjline.commonsimages; exit 0
#O     Copy the downloaded images to the directory of the current article..
        cp $Projectdir/$pjline/commons.wikimedia.org/wiki/*.html $Projectdir/$pjline/.           
#O     Initialize the commonshtml.list file with empty text.
        echo -n "" > commonshtml.doublons  
#O     List the image files in the order of printing or display,
#O       using the list $Projectdir/$pjline/$pjline.commons.images
#O     As long as we can read lines in $Projectdir/$pjline/$pictline.images
        while read pictline
        do
#O       Cut the lines at carriage return, select the last field and add '.html'
          #echo $pictline | awk -F"/" '{for (i=1;i<=NF;i+=2) print $i "\n"}' #| cut -d '%' -f1 | cut -d '.' -f1 > tmp
          echo $pictline | awk -F"/" '{ print $NF".html"}' >> commonshtml.doublons
#O       Remove the duplicated lines by keeping one line out of two.
          echo -n "" > commonshtml.list
          awk 'BEGIN { FILENAME }
                {memfile [NR] = $0 }
               END   { for ( i = 1 ; i <= NR ; i=i+2 ) {
                       print memfile[i] >> "commonshtml.list"
                       } 
	                   # print "Fin"
                     } ' commonshtml.doublons
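#P Example (sketch): if commonshtml.doublons contains, one name per line,
#P   A.html / A.html / B.html / B.html
#P the awk above copies only lines 1 and 3, so commonshtml.list keeps each
#P name once: A.html, B.html.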
#O     End of while $Projectdir/$pjline/$pjline.commonsimages
        done < $Projectdir/$pjline/$pjline.commonsimages

#T     Show html.list
#T echo "*** commonshtml.list ***"; cat commonshtml.list; exit 0
#O   Copy article name in file $Bookname.sclipco
      echo "'''Article : $pjline'''<br />" >> $Pagesclipco
      echo "'''Article : $pjline'''"


#P## Annexe version 'wikimedia commons' ##############################

#O     As long as there are (local) links in the commonshtml.list image file
        while read htmlline
        do
#O       Print the line read,
          echo ""
	      echo ""
          echo "$(gettext ' ---- line read = $htmlline --- ')"
          echo ""
#O   With mkd (sofware), select the character strings from the image file $htmlline
#O    and copy them to $ htmlline.co.str after replacing the character ',' with 
#O    'new-line'
	  mkd -pw '**' $htmlline $htmlline.tmp
	  cat $htmlline.tmp | tr ',' '\n' > $htmlline.co.str 
#T echo "*** $htmlline.co.str : ***"; cat $htmlline.co.str; exit 0     
#O     images, 
        echo -n "'''$(gettext ' Illustration : ')'''" > $htmlline.co.title
        cat $htmlline.co.str | grep wgTitle | cut -d '"' -f4 >> $htmlline.co.title
	    cat $htmlline.co.title >> $Pagesclipco 
	    cat $htmlline.co.title
#T echo "*** $htmlline.co.title : ***"; cat $htmlline.co.title; exit 0
#O     source, 
        echo -n $", ''source : ''https://commons.wikimedia.org/wiki/" > $htmlline.co.source
        ##echo -n ",''$(gettext 'source : ')''https://commons.wikimedia.org/wiki/" > $htmlline.co.source
	    echo -n $htmlline | sed "s/.html//g" | sed "s/.str//g" >> $htmlline.co.source
        if [ "$Site" = "fr.wikibooks.org" ]; then echo "?uselang=fr" >> $htmlline.co.source
        elif [ "$Site" = "en.wikibooks.org" ]; then echo "?uselang=en" >> $htmlline.co.source
        else echo
        fi
        cat $htmlline.co.source >> $Pagesclipco
        cat $htmlline.co.source
#T echo "*** $htmlline.co.source : ***"; cat $htmlline.co.source; exit 0
#O     license, 
        echo -n ", ''$(gettext 'license : ')'' " > $htmlline.co.license
	    cat $htmlline.co.str | grep licensetpl_short | sed "s/<td>//g" | sed "s/<span class//g" | sed "s/<\/span>//g" | sed "s/style=\"display:none;\"//g" | tr '=' '\n' | grep licensetpl_short | awk -F">" '{print $NF}' >> $htmlline.co.license
        cat $htmlline.co.license >> $Pagesclipco
	    cat $htmlline.co.license
#T echo "*** $htmlline.co.license : ***"; cat $htmlline.co.license; exit 0	   

#O     authors. 
        rm -rf tmp
        #Pwww echo -n ", ''$(gettext ' author : ')'' " > $htmlline.co.authors
        #echo -n $", ''author : ''" > $htmlline.co.authors
        echo -n ", ''author : ''" > $htmlline.co.authors
#Test cat tmp; echo "$htmlline.co.authors"; exit -1
	    cat $htmlline.co.str | grep -i -n -m1 -A 1 -e Author -e Auteur | tr '/' '\n' | grep -i -e user -e utilisateur -e auteur -e author | cut -d '"' -f1 | grep -i -e user -e utilisateur -e auteur -e author > tmp
        if test -s tmp; then echo ; else echo "-" > tmp; fi
        cat tmp >> $htmlline.co.authors
        cat $htmlline.co.authors >> $Pagesclipco
        cat $htmlline.co.authors 	   
#O   Finish the page $Pagesclipco
      echo "" >> $Pagesclipco
#O   End of as long as there are lines in commonshtml.list
      done < commonshtml.list
#O  End of 'if the line is not the name of book'.
     fi      
#O End of while line in $Bookname.pj
    done < $Projectdir/$Bookname.pj
#O clean intermediate files
    rm -rf tmp
#O End of page $Pagesclipco 
    echo "</div>" >> $Pagesclipco
    #T echo "$(gettext ' {{Newpage}} ')" >> $Pagesclipco 
    echo "<div style='page-break-before:always'></div>" >> $Pagesclipco
#O ============================================================================	
#O Creating Bookname.appendix
    cat $Projectdir/$Bookname.sclt>$Projectdir/$Bookname.appendix
    cat $Projectdir/$Bookname.sclipco>>$Projectdir/$Bookname.appendix
#O ============================================================================	
#O Display file Bookname.appendix
    echo;echo -e "\033[1;32mcopy and paste the text displayed and add after the book $Bookname.\033[0m"
    cat $Projectdir/$Bookname.appendix 
    echo;echo -e "\033[1;32mcopy and paste the text displayed and add after the book.$Bookname\033[0m"

    exit 0
#O addappendix script end

script tests_addapendix.bash

#!/bin/bash
#H Header doc
#H -------------------------------
#H File : tests/addappendix/tests_addapendix.bash
#H Syntax : ./tests_addapendix.bash
#H Created : 220113 by GC
#H Updated : 220113 by ... for
#O Organizational chart
#O -------------------------------
#P Programmers notes
#P -------------------------------
VERSION=220119
#P Before executing these tests, run tests/preinstall-usr-local.bash to install the datas directory in /usr/local

echo -e "\033[1;033mtest addappendix.sh with first param empty\033[0m"
./addappendix.sh
sleep 3
echo "----"

echo -e "\033[1;033mtest addappendix.sh with first param = '?'\033[0m"
./addappendix.sh ?
sleep 3
echo "----"

echo -e "\033[1;033mtest addappendix.sh with first param = '--v'\033[0m"
./addappendix.sh --v
sleep 3
echo "----"

echo -e "\033[1;033mtest addappendix.sh with param = https://fr.wikibooks.org/wiki/Wikilivres:Compilations/Faire_sa_fleur_de_sel\033[0m"
./addappendix.sh https://fr.wikibooks.org/wiki/Wikilivres:Compilations/Faire_sa_fleur_de_sel

Test results

At the terminal
cardabela@jpl-W230SS:~/addappendix-211219/tests/addapendix$ ./tests_addapendix.bash 
test addappendix.sh with first param empty
No parameter. addappendix [ <full url of book> | ? | -v ]
----
test addappendix.sh with first param = '?'
Syntax: addappendix [ <full url of book> | -v ]
  Example 1 : addappendix https://en.wikibooks.org/wiki/Wikibooks:Collections/Guide_to_Unix
  Example 2 : addappendix https://fr.wikibooks.org/wiki/Wikilivres:Compilations/Faire_sa_fleur_de_sel

----
test addappendix.sh with first param = '-v'
addapendix version : 220117
----
test addappendix.sh with param = https://fr.wikibooks.org/wiki/Wikilivres:Compilations/Faire_sa_fleur_de_sel
https://fr.wikibooks.org/wiki/Wikilivres:Compilations/Faire_sa_fleur_de_sel
  is a wiki-books
File https://fr.wikibooks.org/wiki/Wikilivres:Compilations/Faire_sa_fleur_de_sel is found

Book name : Faire_sa_fleur_de_sel
Compilations name: Wikilivres:Compilations
Suffix = compiled

Faire_sa_fleur_de_sel.compiled : 
[[Faire fleurir le sel/Couverture]]
[[Faire fleurir le sel/Introduction]]
[[Faire fleurir le sel/Préparation]]
[[Faire fleurir le sel/Filtrer et aseptiser]]
[[Faire fleurir le sel/Récolter]]

Dowload https://fr.wikibooks.org/wiki/Wikilivres:Compilations/Faire_sa_fleur_de_sel

List of files created 31/01/2022
 find Add_appendix/books/Faire_sa_fleur_de_sel/
Add_appendix/books/Faire_sa_fleur_de_sel/
Add_appendix/books/Faire_sa_fleur_de_sel/Faire_sa_fleur_de_sel.prj
Add_appendix/books/Faire_sa_fleur_de_sel/fr.wikibooks.org
Add_appendix/books/Faire_sa_fleur_de_sel/fr.wikibooks.org/robots.txt
Add_appendix/books/Faire_sa_fleur_de_sel/fr.wikibooks.org/wiki
Add_appendix/books/Faire_sa_fleur_de_sel/fr.wikibooks.org/wiki/Faire_fleurir_le_sel
Add_appendix/books/Faire_sa_fleur_de_sel/fr.wikibooks.org/wiki/Faire_fleurir_le_sel/Introduction.html
Add_appendix/books/Faire_sa_fleur_de_sel/fr.wikibooks.org/wiki/Faire_fleurir_le_sel/Récolter.html
Add_appendix/books/Faire_sa_fleur_de_sel/fr.wikibooks.org/wiki/Faire_fleurir_le_sel/Préparation.html
Add_appendix/books/Faire_sa_fleur_de_sel/fr.wikibooks.org/wiki/Faire_fleurir_le_sel/Filtrer_et_aseptiser.html
Add_appendix/books/Faire_sa_fleur_de_sel/fr.wikibooks.org/wiki/Faire_fleurir_le_sel/Couverture.html
Add_appendix/books/Faire_sa_fleur_de_sel/Faire_sa_fleur_de_sel.dirs
Add_appendix/books/Faire_sa_fleur_de_sel/Filtrer_et_aseptiser
Add_appendix/books/Faire_sa_fleur_de_sel/Filtrer_et_aseptiser/Filtre
Add_appendix/books/Faire_sa_fleur_de_sel/Filtrer_et_aseptiser/Filtrer_et_aseptiser.license
Add_appendix/books/Faire_sa_fleur_de_sel/Filtrer_et_aseptiser/Filtrer_et_aseptiser.author
Add_appendix/books/Faire_sa_fleur_de_sel/Filtrer_et_aseptiser/Filtrer_et_aseptiser.html
Add_appendix/books/Faire_sa_fleur_de_sel/Filtrer_et_aseptiser/Filtrer_et_aseptiser.str
Add_appendix/books/Faire_sa_fleur_de_sel/Filtrer_et_aseptiser/Filtrer_et_aseptiser.article
Add_appendix/books/Faire_sa_fleur_de_sel/Filtrer_et_aseptiser/Filtrer_et_aseptiser.RevisionId
Add_appendix/books/Faire_sa_fleur_de_sel/Faire_sa_fleur_de_sel.sclt
Add_appendix/books/Faire_sa_fleur_de_sel/Faire_sa_fleur_de_sel.compiled.cleaned
Add_appendix/books/Faire_sa_fleur_de_sel/Faire_sa_fleur_de_sel.pj
Add_appendix/books/Faire_sa_fleur_de_sel/Faire_sa_fleur_de_sel.locale.list
Add_appendix/books/Faire_sa_fleur_de_sel/.tmp
Add_appendix/books/Faire_sa_fleur_de_sel/Faire_sa_fleur_de_sel.compiled
Add_appendix/books/Faire_sa_fleur_de_sel/Faire_sa_fleur_de_sel.list
Add_appendix/books/Faire_sa_fleur_de_sel/Préparation
Add_appendix/books/Faire_sa_fleur_de_sel/Préparation/Préparation.author
Add_appendix/books/Faire_sa_fleur_de_sel/Préparation/Préparation.html
Add_appendix/books/Faire_sa_fleur_de_sel/Préparation/Préparation.RevisionId
Add_appendix/books/Faire_sa_fleur_de_sel/Préparation/Préparation.license
Add_appendix/books/Faire_sa_fleur_de_sel/Préparation/Préparation.str
Add_appendix/books/Faire_sa_fleur_de_sel/Préparation/Préparation.article
Add_appendix/books/Faire_sa_fleur_de_sel/Récolter
Add_appendix/books/Faire_sa_fleur_de_sel/Récolter/Récolter.tmp
Add_appendix/books/Faire_sa_fleur_de_sel/Récolter/Récolter.license
Add_appendix/books/Faire_sa_fleur_de_sel/Récolter/Récolter.html
Add_appendix/books/Faire_sa_fleur_de_sel/Récolter/Récolter.author
Add_appendix/books/Faire_sa_fleur_de_sel/Récolter/Récolter.article
Add_appendix/books/Faire_sa_fleur_de_sel/Récolter/Récolter.str
Add_appendix/books/Faire_sa_fleur_de_sel/Récolter/Récolter.RevisionId
Add_appendix/books/Faire_sa_fleur_de_sel/resources
Add_appendix/books/Faire_sa_fleur_de_sel/resources/Faire_sa_fleur_de_sel
Add_appendix/books/Faire_sa_fleur_de_sel/resources/Faire_sa_fleur_de_sel/fr.wikibooks.org
Add_appendix/books/Faire_sa_fleur_de_sel/resources/Faire_sa_fleur_de_sel/fr.wikibooks.org/robots.txt
Add_appendix/books/Faire_sa_fleur_de_sel/resources/Faire_sa_fleur_de_sel/fr.wikibooks.org/wiki
Add_appendix/books/Faire_sa_fleur_de_sel/resources/Faire_sa_fleur_de_sel/fr.wikibooks.org/wiki/Wikilivres:Compilations
Add_appendix/books/Faire_sa_fleur_de_sel/resources/Faire_sa_fleur_de_sel/fr.wikibooks.org/wiki/Wikilivres:Compilations/Faire_sa_fleur_de_sel.html
Add_appendix/books/Faire_sa_fleur_de_sel/resources/Faire_sa_fleur_de_sel/Faire_sa_fleur_de_sel
Add_appendix/books/Faire_sa_fleur_de_sel/resources/Faire_sa_fleur_de_sel/extract-dd1
Add_appendix/books/Faire_sa_fleur_de_sel/resources/Faire_sa_fleur_de_sel/Faire_sa_fleur_de_sel.compiled
Add_appendix/books/Faire_sa_fleur_de_sel/resources/Faire_sa_fleur_de_sel/extract-li1
Add_appendix/books/Faire_sa_fleur_de_sel/resources/Faire_sa_fleur_de_sel/extract-dd
Add_appendix/books/Faire_sa_fleur_de_sel/resources/Faire_sa_fleur_de_sel/extract-li
Add_appendix/books/Faire_sa_fleur_de_sel/Introduction
Add_appendix/books/Faire_sa_fleur_de_sel/Introduction/Introduction.article
Add_appendix/books/Faire_sa_fleur_de_sel/Introduction/Introduction.html
Add_appendix/books/Faire_sa_fleur_de_sel/Introduction/Introduction.license
Add_appendix/books/Faire_sa_fleur_de_sel/Introduction/Introduction.RevisionId
Add_appendix/books/Faire_sa_fleur_de_sel/Introduction/Introduction.str
Add_appendix/books/Faire_sa_fleur_de_sel/Introduction/Introduction.author
Add_appendix/books/Faire_sa_fleur_de_sel/Faire_sa_fleur_de_sel.mainPage
Add_appendix/books/Faire_sa_fleur_de_sel/Couverture
Add_appendix/books/Faire_sa_fleur_de_sel/Couverture/Couverture.license
Add_appendix/books/Faire_sa_fleur_de_sel/Couverture/Couverture.article
Add_appendix/books/Faire_sa_fleur_de_sel/Couverture/Couverture.tmp
Add_appendix/books/Faire_sa_fleur_de_sel/Couverture/Couverture.RevisionId
Add_appendix/books/Faire_sa_fleur_de_sel/Couverture/Couverture.html
Add_appendix/books/Faire_sa_fleur_de_sel/Couverture/Couverture.author
Add_appendix/books/Faire_sa_fleur_de_sel/Couverture/Couverture.str

Annexe créée avec addappendix

  • See:

Appendix of the compilation Faire_sa_fleur_de_sel, 25 February 2022, with the internationalized addappendix version from the Linux packaging;
note that the French translation has not been done yet.

  • To compare with the version of June 2020:

Appendix of Faire_fleurir_le_sel, June 2020, with the brand-new Annexer version