diff --git a/README.ipynb b/README.ipynb
index e0c9db998ebe8d0ed5ae71d36c96281db666f797..02c812c1b4317310ed4b9c33b2af1d13d0ddf5eb 100644
--- a/README.ipynb
+++ b/README.ipynb
@@ -77,12 +77,16 @@
        "      Episode 4 : Improving the results with data augmentation  \n",
        "[[GTS5] - CNN with GTSRB dataset - Full convolutions ](GTSRB/05-Full-convolutions.ipynb)  \n",
        "      Episode 5 : A lot of models, a lot of datasets and a lot of results.  \n",
-       "[[GTS6] - CNN with GTSRB dataset - Full convolutions as a batch](GTSRB/06-Full-convolutions-batch.ipynb)  \n",
+       "[[GTS6] - CNN with GTSRB dataset - Full convolutions as a batch](GTSRB/06-Notebook-as-a-batch.ipynb)  \n",
        "      Episode 6 : Run Full convolution notebook as a batch  \n",
-       "[[GTS7] - Full convolutions Report](GTSRB/07-Full-convolutions-reports.ipynb)  \n",
+       "[[GTS7] - CNN with GTSRB dataset - Show reports](GTSRB/07-Show-report.ipynb)  \n",
        "      Episode 7 : Displaying the reports of the different jobs  \n",
        "[[TSB1] - Tensorboard with/from Jupyter ](GTSRB/99-Scripts-Tensorboard.ipynb)  \n",
        "      4 ways to use Tensorboard from the Jupyter environment  \n",
+       "[[BASH1] - OAR batch script](GTSRB/batch_oar.sh)  \n",
+       "      Bash script for OAR batch submission of GTSRB notebook  \n",
+       "[[BASH2] - SLURM batch script](GTSRB/batch_slurm.sh)  \n",
+       "      Bash script for SLURM batch submission of GTSRB notebooks  \n",
        "[[IMDB1] - Text embedding with IMDB](IMDB/01-Embedding-Keras.ipynb)  \n",
        "      A very classical example of word embedding for text classification (sentiment analysis)  \n",
        "[[IMDB2] - Text embedding with IMDB - Reloaded](IMDB/02-Prediction.ipynb)  \n",
@@ -97,11 +101,11 @@
       "      Episode 3: Attempt to predict in the longer term   \n",
       "[[VAE1] - Variational AutoEncoder (VAE) with MNIST](VAE/01-VAE-with-MNIST.ipynb)  \n",
       "      Episode 1 : Model construction and Training  \n",
        "[[VAE2] - Variational AutoEncoder (VAE) with MNIST - Analysis](VAE/02-VAE-with-MNIST-post.ipynb)  \n",
        "      Episode 2 : Exploring our latent space  \n",
        "[[VAE3] - About the CelebA dataset](VAE/03-About-CelebA.ipynb)  \n",
-       "      Episode 3 : About the CelebA dataset, a more fun dataset !  \n",
-       "[[VAE4] - Preparation of the CelebA dataset](VAE/04-Prepare-CelebA-batch.ipynb)  \n",
+       "      Episode 3 : About the CelebA dataset, a more fun dataset ;-)  \n",
+       "[[VAE4] - Preparation of the CelebA dataset](VAE/04-Prepare-CelebA-datasets.ipynb)  \n",
        "      Episode 4 : Preparation of a clustered dataset, batchable  \n",
        "[[VAE5] - Checking the clustered CelebA dataset](VAE/05-Check-CelebA.ipynb)  \n",
        "      Episode 5 :\\tChecking the clustered dataset  \n",
@@ -111,10 +117,10 @@
        "      Episode 7 : Variational AutoEncoder (VAE) with CelebA (medium res.)  \n",
        "[[VAE8] - Variational AutoEncoder (VAE) with CelebA - Analysis](VAE/08-VAE-withCelebA-post.ipynb)  \n",
        "      Episode 8 : Exploring latent space of our trained models  \n",
-       "[[BASH1] - OAR batch script](VAE/batch-oar.sh)  \n",
-       "      Bash script for OAR batch submission of a notebook  \n",
-       "[[BASH2] - SLURM batch script](VAE/batch-slurm.sh)  \n",
-       "      Bash script for SLURM batch submission of a notebook  \n",
+       "[[BASH1] - OAR batch script](VAE/batch_oar.sh)  \n",
+       "      Bash script for OAR batch submission of VAE notebook  \n",
+       "[[BASH2] - SLURM batch script](VAE/batch_slurm.sh)  \n",
+       "      Bash script for SLURM batch submission of VAE notebooks  \n",
        "[[ACTF1] - Activation functions](Misc/Activation-Functions.ipynb)  \n",
        "      Some activation functions, with their derivatives.  \n",
        "[[NP1] - A short introduction to Numpy](Misc/Numpy.ipynb)  \n",
diff --git a/README.md b/README.md
index 58aa36ba5460b46b37921e1e5167e02225095ebf..c1b956200aec750a9e8aed4f27bec129586f12b2 100644
--- a/README.md
+++ b/README.md
@@ -92,7 +92,7 @@ Some other useful informations are also available in the [wiki](https://gricad-g
 [[VAE2] - Variational AutoEncoder (VAE) with MNIST - Analysis](VAE/02-VAE-with-MNIST-post.ipynb)  
       Episode 2 : Exploring our latent space  
 [[VAE3] - About the CelebA dataset](VAE/03-About-CelebA.ipynb)  
-      Episode 3 : About the CelebA dataset, a more fun dataset !  
+      Episode 3 : About the CelebA dataset, a more fun dataset ;-)  
 [[VAE4] - Preparation of the CelebA dataset](VAE/04-Prepare-CelebA-datasets.ipynb)  
       Episode 4 : Preparation of a clustered dataset, batchable  
 [[VAE5] - Checking the clustered CelebA dataset](VAE/05-Check-CelebA.ipynb)  
diff --git a/VAE/03-About-CelebA.ipynb b/VAE/03-About-CelebA.ipynb
index 61f7ee72e3816d9a4195682d7f4d50f950d51d6c..f052600d78fa108fa1411648724b75b013966c5f 100644
--- a/VAE/03-About-CelebA.ipynb
+++ b/VAE/03-About-CelebA.ipynb
@@ -7,7 +7,7 @@
     "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
     "# <!-- TITLE --> [VAE3] - About the CelebA dataset\n",
-    "<!-- DESC --> Episode 3 : About the CelebA dataset, a more fun dataset !\n",
+    "<!-- DESC --> Episode 3 : About the CelebA dataset, a more fun dataset ;-)\n",
     "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
     "## Objectives :\n",