diff --git a/BHPD/01-DNN-Regression.ipynb b/BHPD/01-DNN-Regression.ipynb
index 01de329b1f0aa1e8d7569cba252e4c3af216da47..d6ac920766cb438a289d9e91cc0b80d315dade6c 100644
--- a/BHPD/01-DNN-Regression.ipynb
+++ b/BHPD/01-DNN-Regression.ipynb
@@ -7,19 +7,19 @@
     "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
     "\n",
     "# <!-- TITLE --> Regression with a Dense Network (DNN)\n",
-    "\n",
     "<!-- DESC --> A Simple regression with a Dense Neural Network (DNN) - BHPD dataset\n",
+    "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
     "## Objectives :\n",
     " - Predicts **housing prices** from a set of house features. \n",
-    " - Understanding the principle and the architecture of a regression with a dense neural network  \n",
+    " - Understanding the **principle** and the **architecture** of a regression with a **dense neural network**  \n",
     "\n",
     "\n",
     "The **[Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html)** consists of price of houses in various places in Boston.  \n",
     "Alongside with price, the dataset also provide information such as Crime, areas of non-retail business in the town,  \n",
     "age of people who own the house and many other attributes...\n",
     "\n",
-    "What we're going to do:\n",
+    "## What we're going to do :\n",
     "\n",
     " - Retrieve data\n",
     " - Preparing the data\n",
diff --git a/BHPD/02-DNN-Regression-Premium.ipynb b/BHPD/02-DNN-Regression-Premium.ipynb
index 364a6d5c4a5a03ac80d501baddebf9bd9df3bb1c..311c5865a2f044d37eed591ecbb3607d7b43693f 100644
--- a/BHPD/02-DNN-Regression-Premium.ipynb
+++ b/BHPD/02-DNN-Regression-Premium.ipynb
@@ -7,8 +7,8 @@
     "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
     "\n",
     "# <!-- TITLE --> Regression with a Dense Network (DNN) - Advanced code\n",
-    "\n",
     "  <!-- DESC -->  More advanced example of DNN network code - BHPD dataset\n",
+    "  <!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
     "## Objectives :\n",
     " - Predicts **housing prices** from a set of house features. \n",
@@ -18,7 +18,7 @@
     "Alongside with price, the dataset also provide information such as Crime, areas of non-retail business in the town,  \n",
     "age of people who own the house and many other attributes...\n",
     "\n",
-    "What we're going to do:\n",
+    "## What we're going to do :\n",
     "\n",
     " - (Retrieve data)\n",
     " - (Preparing the data)\n",
@@ -1168,8 +1168,8 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "-----\n",
-    "That's all folks !"
+    "---\n",
+    "![](../fidle/img/00-Fidle-logo-01_s.png)"
    ]
   }
  ],
diff --git a/GTSRB/01-Preparation-of-data.ipynb b/GTSRB/01-Preparation-of-data.ipynb
index 77e1bd4a4dc73a33a7d28feed467e15241a93807..cb51503e17ebbdb414ebcd4da8542e567da6e3f7 100644
--- a/GTSRB/01-Preparation-of-data.ipynb
+++ b/GTSRB/01-Preparation-of-data.ipynb
@@ -4,12 +4,22 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "German Traffic Sign Recognition Benchmark (GTSRB)\n",
-    "=================================================\n",
-    "---\n",
-    "Introduction au Deep Learning  (IDLE) - S. Arias, E. Maldonado, JL. Parouty - CNRS/SARI/DEVLOG - 2020  \n",
+    "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
+    "\n",
+    "# <!-- TITLE --> CNN with GTSRB dataset - Data analysis and preparation\n",
+    "<!-- DESC --> Episode 1: Data analysis and creation of a usable dataset\n",
+    "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
-    "## Episode 1 : Preparation of data\n",
+    "## Objectives :\n",
+    " - Understand the **complexity associated with data**, even when it is only images\n",
+    " - Learn how to build up a simple and **usable image dataset**\n",
+    "\n",
+    "The German Traffic Sign Recognition Benchmark (GTSRB) is a dataset with more than 50,000 photos of road signs from about 40 classes.  \n",
+    "The final aim is to recognise them !  \n",
+    "Description is available there : http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset\n",
+    "\n",
+    "\n",
+    "## What we're going to do :\n",
     "\n",
     " - Understanding the dataset\n",
     " - Preparing and formatting enhanced data\n",
@@ -20,7 +30,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 1/ Import and init"
+    "## Step 1 -  Import and init"
    ]
   },
   {
@@ -100,12 +110,12 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 2/ Read the dataset\n",
+    "## Step 2 - Read the dataset\n",
     "Description is available there : http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset\n",
     " - Each directory contains one CSV file with annotations (\"GT-<ClassID>.csv\") and the training images\n",
     " - First line is fieldnames: Filename;Width;Height;Roi.X1;Roi.Y1;Roi.X2;Roi.Y2;ClassId  \n",
     "    \n",
-    "### 2.1/ Usefull functions"
+    "### 2.1 - Usefull functions"
    ]
   },
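+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As an illustration of the annotation format described above, one of these CSV files could be read like this (a minimal sketch only ; the file name is an example and the notebook's own reading functions follow) :\n",
+    "```python\n",
+    "import pandas as pd\n",
+    "\n",
+    "# Semicolon-separated annotations for one class directory\n",
+    "df = pd.read_csv('GT-00000.csv', sep=';')\n",
+    "print(df[['Filename', 'Width', 'Height', 'ClassId']].head())\n",
+    "```"
+   ]
+  },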
   {
@@ -152,7 +162,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### 2.2/ Read the data\n",
+    "### 2.2 - Read the data\n",
     "We will read the following datasets:\n",
     " - **x_train, y_train** : Learning data\n",
     " - **x_test, y_test** : Validation or test data\n",
@@ -202,10 +212,10 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 3/ Few statistics about train dataset\n",
+    "## Step 3 - Few statistics about train dataset\n",
     "We want to know if our images are homogeneous in terms of size, ratio, width or height.\n",
     "\n",
-    "### 3.1/ Do statistics "
+    "### 3.1 - Do statistics "
    ]
   },
   {
@@ -243,7 +253,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### 3.2/ Show statistics"
+    "### 3.2 - Show statistics"
    ]
   },
   {
@@ -379,7 +389,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 4/ List of classes\n",
+    "## Step 4 - List of classes\n",
     "What are the 43 classes of our images..."
    ]
   },
@@ -408,7 +418,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 5/ What does it really look like"
+    "## Step 5 - What does it really look like"
    ]
   },
   {
@@ -438,7 +448,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 6/ dataset cooking...\n",
+    "## Step 6 - dataset cooking...\n",
     "\n",
     "Images must have the **same size** to match the size of the network.   \n",
     "It is possible to work on **rgb** or **monochrome** images and **equalize** the histograms.   \n",
@@ -448,7 +458,7 @@
     "See : [Local histogram equalization](https://scikit-image.org/docs/dev/api/skimage.filters.rank.html#skimage.filters.rank.equalize)  \n",
     "See : [Histogram equalization](https://scikit-image.org/docs/dev/api/skimage.exposure.html#skimage.exposure.equalize_hist)  \n",
     "\n",
-    "### 6.1/ Enhancement cook"
+    "### 6.1 - Enhancement cook"
    ]
   },
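+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a minimal sketch of the kind of processing involved (illustrative only, assuming scikit-image is installed ; the recipes actually used are defined below) :\n",
+    "```python\n",
+    "from skimage import color, exposure, transform\n",
+    "\n",
+    "def cook_basic(image, size=(24, 24)):\n",
+    "    gray = color.rgb2gray(image)           # RGB -> monochrome, float in [0, 1]\n",
+    "    gray = exposure.equalize_hist(gray)    # global histogram equalization\n",
+    "    return transform.resize(gray, size)    # every image gets the same size\n",
+    "```"
+   ]
+  },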
   {
@@ -523,7 +533,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### 6.2/ To get an idea of the different recipes"
+    "### 6.2 - To get an idea of the different recipes"
    ]
   },
   {
@@ -562,7 +572,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### 6.3/ Cook and save\n",
+    "### 6.3 - Cook and save\n",
     "A function to save a dataset"
    ]
   },
@@ -626,7 +636,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 7/ Reload data to be sure ;-)"
+    "## Step 7 - Reload data to be sure ;-)"
    ]
   },
   {
@@ -653,8 +663,8 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "----\n",
-    "That's all folks !"
+    "---\n",
+    "![](../fidle/img/00-Fidle-logo-01_s.png)"
    ]
   }
  ],
diff --git a/GTSRB/02-First-convolutions.ipynb b/GTSRB/02-First-convolutions.ipynb
index ac8dd91e596b4645ef010340ce2eb49adbdb1a9c..b8535119983d5b289eda2d0c2c10f359a3c3e5d3 100644
--- a/GTSRB/02-First-convolutions.ipynb
+++ b/GTSRB/02-First-convolutions.ipynb
@@ -4,20 +4,29 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "German Traffic Sign Recognition Benchmark (GTSRB)\n",
-    "=================================================\n",
-    "---\n",
-    "Introduction au Deep Learning  (IDLE) - S. Arias, E. Maldonado, JL. Parouty - CNRS/SARI/DEVLOG - 2020  \n",
+    "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
+    "\n",
+    "# <!-- TITLE --> CNN with GTSRB dataset - First convolutions\n",
+    "<!-- DESC --> Episode 2 : First convolutions and first results\n",
+    "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
+    "\n",
+    "## Objectives :\n",
+    "  - Recognizing traffic signs \n",
+    "  - Understand the **principles** and **architecture** of a **convolutional neural network** for image classification\n",
+    "  \n",
+    "The German Traffic Sign Recognition Benchmark (GTSRB) is a dataset with more than 50,000 photos of road signs from about 40 classes.  \n",
+    "The final aim is to recognise them !  \n",
+    "Description is available there : http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset\n",
     "\n",
-    "## Episode 2 : First Convolutions\n",
     "\n",
-    "Our main steps:\n",
+    "## What we're going to do :\n",
+    "\n",
     " - Read H5 dataset\n",
     " - Build a model\n",
     " - Train the model\n",
     " - Evaluate the model\n",
     "\n",
-    "## 1/ Import and init"
+    "## Step 1 - Import and init"
    ]
   },
   {
@@ -47,7 +56,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 2/ Load dataset\n",
+    "## Step 2 - Load dataset\n",
     "We're going to retrieve a previously recorded dataset.  \n",
     "For example: set-24x24-L"
    ]
@@ -84,7 +93,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 3/ Have a look to the dataset\n",
+    "## Step 3 - Have a look to the dataset\n",
     "We take a quick look as we go by..."
    ]
   },
@@ -107,7 +116,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 4/ Create model\n",
+    "## Step 4 - Create model\n",
     "We will now build a model and train it...\n",
     "\n",
     "Some models :"
@@ -199,7 +208,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 5/ Train the model\n",
+    "## Step 5 - Train the model\n",
     "**Get the shape of my data :**"
    ]
   },
@@ -292,6 +301,14 @@
     "print('Test loss      : {:5.4f}'.format(score[0]))\n",
     "print('Test accuracy  : {:5.4f}'.format(score[1]))"
    ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "---\n",
+    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+   ]
   }
  ],
  "metadata": {
@@ -310,7 +327,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.5"
+   "version": "3.7.6"
   }
  },
  "nbformat": 4,
diff --git a/GTSRB/03-Tracking-and-visualizing.ipynb b/GTSRB/03-Tracking-and-visualizing.ipynb
index c211ca36f6ccbe6843c878fd7b4572f537dc8bd8..5819f76d534a9471a886eca84e46da611532e770 100644
--- a/GTSRB/03-Tracking-and-visualizing.ipynb
+++ b/GTSRB/03-Tracking-and-visualizing.ipynb
@@ -4,20 +4,29 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "German Traffic Sign Recognition Benchmark (GTSRB)\n",
-    "=================================================\n",
-    "---\n",
-    "Introduction au Deep Learning  (IDLE) - S. Arias, E. Maldonado, JL. Parouty - CNRS/SARI/DEVLOG - 2020\n",
+    "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
+    "\n",
+    "# <!-- TITLE --> CNN with GTSRB dataset - Monitoring \n",
+    "<!-- DESC --> Episode 3: Monitoring and analysing training, managing checkpoints\n",
+    "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
+    "\n",
+    "## Objectives :\n",
+    "  - **Understand** what happens during the **training** process\n",
+    "  - Implement **monitoring**, **backup** and **recovery** solutions\n",
+    "  \n",
+    "The German Traffic Sign Recognition Benchmark (GTSRB) is a dataset with more than 50,000 photos of road signs from about 40 classes.  \n",
+    "The final aim is to recognise them !  \n",
+    "Description is available there : http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset\n",
     "\n",
-    "## Episode 3 : Tracking, visualizing and save models\n",
     "\n",
-    "Our main steps:\n",
+    "## What we're going to do :\n",
+    "\n",
     " - Monitoring and understanding our model training \n",
     " - Add recovery points\n",
     " - Analyze the results \n",
-    " - Restore and run recovery pont\n",
+    " - Restore and run recovery points\n",
     "\n",
-    "## 1/ Import and init"
+    "## Step 1 - Import and init"
    ]
   },
   {
@@ -51,7 +60,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 2/ Load dataset\n",
+    "## Step 2 - Load dataset\n",
     "Dataset is one of the saved dataset: RGB25, RGB35, L25, L35, etc.  \n",
     "First of all, we're going to use a smart dataset : **set-24x24-L**  \n",
     "(with a GPU, it only takes 35'' compared to more than 5' with a CPU !)"
@@ -91,7 +100,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 3/ Have a look to the dataset\n",
+    "## Step 3 - Have a look to the dataset\n",
     "Note: Data must be reshape for matplotlib"
    ]
   },
@@ -114,7 +123,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 4/ Create model\n",
+    "## Step 4 - Create model\n",
     "We will now build a model and train it...\n",
     "\n",
     "Some models... "
@@ -152,7 +161,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 5/ Prepare callbacks  \n",
+    "## Step 5 - Prepare callbacks  \n",
     "We will add 2 callbacks :  \n",
     " - **TensorBoard**  \n",
     "Training logs, which can be visualised with Tensorboard.  \n",
@@ -204,7 +213,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 5/ Train the model\n",
+    "## Step 6 - Train the model\n",
     "**Get the shape of my data :**"
    ]
   },
@@ -309,7 +318,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 6/ History\n",
+    "## Step 7 - History\n",
     "The return of model.fit() returns us the learning history"
    ]
   },
@@ -326,7 +335,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 7/ Evaluation and confusion"
+    "## Step 8 - Evaluation and confusion"
    ]
   },
   {
@@ -345,8 +354,8 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 8/ Restore and evaluate\n",
-    "### 8.1/ List saved models :"
+    "## Step 9 - Restore and evaluate\n",
+    "### 9.1 - List saved models :"
    ]
   },
   {
@@ -362,7 +371,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### 8.2/ Restore a model :"
+    "### 9.2 - Restore a model :"
    ]
   },
   {
@@ -380,7 +389,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### 8.3/ Evaluate it :"
+    "### 9.3 - Evaluate it :"
    ]
   },
   {
@@ -399,7 +408,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### 8.4/ Make a prediction :"
+    "### 9.4 - Make a prediction :"
    ]
   },
   {
@@ -454,15 +463,8 @@
    "metadata": {},
    "source": [
     "---\n",
-    "That's all folks !"
+    "![](../fidle/img/00-Fidle-logo-01_s.png)"
    ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
   }
  ],
  "metadata": {
@@ -481,7 +483,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.5"
+   "version": "3.7.6"
   }
  },
  "nbformat": 4,
diff --git a/GTSRB/04-Data-augmentation.ipynb b/GTSRB/04-Data-augmentation.ipynb
index e8c9079b7b49bca3069cbb5b40d5dbc96442d851..8779785fe265a41666fde772ccedf5d159a6ae63 100644
--- a/GTSRB/04-Data-augmentation.ipynb
+++ b/GTSRB/04-Data-augmentation.ipynb
@@ -4,17 +4,26 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "German Traffic Sign Recognition Benchmark (GTSRB)\n",
-    "=================================================\n",
-    "---\n",
-    "Introduction au Deep Learning  (IDLE) - S. Arias, E. Maldonado, JL. Parouty - CNRS/SARI/DEVLOG - 2020\n",
+    "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
+    "\n",
+    "# <!-- TITLE --> CNN with GTSRB dataset - Data augmentation \n",
+    "<!-- DESC --> Episode 4: Improving the results with data augmentation\n",
+    "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
+    "\n",
+    "## Objectives :\n",
+    "  - Trying to improve training by **enhancing the data**\n",
+    "  - Using Keras' **data augmentation utilities**, finding their limits...\n",
+    "  \n",
+    "The German Traffic Sign Recognition Benchmark (GTSRB) is a dataset with more than 50,000 photos of road signs from about 40 classes.  \n",
+    "The final aim is to recognise them !  \n",
+    "Description is available there : http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset\n",
     "\n",
-    "## Episode 4 : Data augmentation\n",
     "\n",
-    "Our main steps:\n",
-    " - Increase and improve the learning dataset\n",
+    "## What we're going to do :\n",
+    " - Increase and improve the training dataset\n",
+    " - Identify the limits of these tools\n",
     "\n",
-    "## 1/ Import and init"
+    "## Step 1 - Import and init"
    ]
   },
   {
@@ -48,7 +57,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 2/ Dataset loader\n",
+    "## Step 2 - Dataset loader\n",
     "Dataset is one of the saved dataset: RGB25, RGB35, L25, L35, etc.  \n",
     "First of all, we're going to use a smart dataset : **set-24x24-L**  \n",
     "(with a GPU, it only takes 35'' compared to more than 5' with a CPU !)"
@@ -84,7 +93,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 3/ Models\n",
+    "## Step 3 - Models\n",
     "We will now build a model and train it...\n",
     "\n",
     "This is my model ;-) "
@@ -122,7 +131,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 4/ Callbacks  \n",
+    "## Step 4 - Callbacks  \n",
     "We prepare 2 kind callbacks :  TensorBoard and Model backup"
    ]
   },
@@ -165,8 +174,8 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 5/ Load and prepare dataset\n",
-    "### 5.1/ Load"
+    "## Step 5 - Load and prepare dataset\n",
+    "### 5.1 - Load"
    ]
   },
   {
@@ -182,7 +191,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### 5.2/ Data augmentation"
+    "### 5.2 - Data augmentation"
    ]
   },
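+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a reminder of how Keras data augmentation works, here is a minimal, illustrative sketch (the parameter values are assumptions, not the settings used in this notebook) :\n",
+    "```python\n",
+    "from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
+    "\n",
+    "datagen = ImageDataGenerator(rotation_range=10,\n",
+    "                             width_shift_range=0.1,\n",
+    "                             height_shift_range=0.1,\n",
+    "                             zoom_range=0.1)\n",
+    "\n",
+    "# The generator yields randomly transformed batches on the fly :\n",
+    "# model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=5)\n",
+    "```"
+   ]
+  },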
   {
@@ -205,7 +214,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 6/ Train the model\n",
+    "## Step 6 - Train the model\n",
     "**Get the shape of my data :**"
    ]
   },
@@ -309,7 +318,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 7/ History\n",
+    "## Step 7 - History\n",
     "The return of model.fit() returns us the learning history"
    ]
   },
@@ -326,14 +335,14 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 8/ Evaluate best model"
+    "## Step 8 - Evaluate best model"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### 8.1/ Restore best model :"
+    "### 8.1 - Restore best model :"
    ]
   },
   {
@@ -351,7 +360,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### 8.2/ Evaluate it :"
+    "### 8.2 - Evaluate it :"
    ]
   },
   {
@@ -390,15 +399,8 @@
    "metadata": {},
    "source": [
     "---\n",
-    "That's all folks !"
+    "![](../fidle/img/00-Fidle-logo-01_s.png)"
    ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
   }
  ],
  "metadata": {
@@ -417,7 +419,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.5"
+   "version": "3.7.6"
   }
  },
  "nbformat": 4,
diff --git a/GTSRB/05-Full-convolutions.ipynb b/GTSRB/05-Full-convolutions.ipynb
index 5ae6b2870b747b7818c7e7157148cd20120d21e4..c1d833a0304086a330b369968767a83c0901cb52 100644
--- a/GTSRB/05-Full-convolutions.ipynb
+++ b/GTSRB/05-Full-convolutions.ipynb
@@ -4,19 +4,29 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "German Traffic Sign Recognition Benchmark (GTSRB)\n",
-    "=================================================\n",
-    "---\n",
-    "Introduction au Deep Learning  (IDLE) - S. Arias, E. Maldonado, JL. Parouty - CNRS/SARI/DEVLOG - 2020  \n",
+    "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
+    "\n",
+    "# <!-- TITLE --> CNN with GTSRB dataset - Full convolutions \n",
+    "<!-- DESC --> Episode 5: A lot of models, a lot of datasets and a lot of results.\n",
+    "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
+    "\n",
+    "## Objectives :\n",
+    "  - Try multiple solutions\n",
+    "  - Design a generic and batch-usable code\n",
+    "  \n",
+    "The German Traffic Sign Recognition Benchmark (GTSRB) is a dataset with more than 50,000 photos of road signs from about 40 classes.  \n",
+    "The final aim is to recognise them !  \n",
+    "Description is available there : http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset\n",
     "\n",
-    "## Episode 5 : Full Convolutions\n",
+    "\n",
+    "## What we're going to do :\n",
     "\n",
     "Our main steps:\n",
     " - Try n models with n datasets\n",
     " - Save a Pandas/h5 report\n",
     " - Write to be run in batch mode\n",
     "\n",
-    "## 1/ Import"
+    "## Step 1 - Import"
    ]
   },
   {
@@ -42,7 +52,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 2/ Init and start"
+    "## Step 2 - Init and start"
    ]
   },
   {
@@ -78,7 +88,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 3/ Dataset loading"
+    "## Step 3 - Dataset loading"
    ]
   },
   {
@@ -107,7 +117,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 4/ Models collection"
+    "## Step 4 - Models collection"
    ]
   },
   {
@@ -191,7 +201,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 5/ Multiple datasets, multiple models ;-)"
+    "## Step 5 - Multiple datasets, multiple models ;-)"
    ]
   },
   {
@@ -287,7 +297,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 6/ Run !"
+    "## Step 6 - Run !"
    ]
   },
   {
@@ -378,7 +388,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 7/ That's all folks.."
+    "## Step 7 - That's all folks.."
    ]
   },
   {
@@ -392,11 +402,12 @@
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": null,
+   "cell_type": "markdown",
    "metadata": {},
-   "outputs": [],
-   "source": []
+   "source": [
+    "---\n",
+    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+   ]
   }
  ],
  "metadata": {
@@ -415,7 +426,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.5"
+   "version": "3.7.6"
   }
  },
  "nbformat": 4,
diff --git a/GTSRB/05.1-Full-convolutions-batch.ipynb b/GTSRB/06-Full-convolutions-batch.ipynb
similarity index 76%
rename from GTSRB/05.1-Full-convolutions-batch.ipynb
rename to GTSRB/06-Full-convolutions-batch.ipynb
index 19d5b3e2d5b3eed902ff3cc9234f82109d822853..d1204e310ffb6dbe3b7f5d66160d85304dd3fc0b 100644
--- a/GTSRB/05.1-Full-convolutions-batch.ipynb
+++ b/GTSRB/06-Full-convolutions-batch.ipynb
@@ -4,22 +4,31 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "German Traffic Sign Recognition Benchmark (GTSRB)\n",
-    "=================================================\n",
-    "---\n",
-    "Introduction au Deep Learning  (IDLE) - S. Arias, E. Maldonado, JL. Parouty - CNRS/SARI/DEVLOG - 2020  \n",
+    "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
+    "\n",
+    "# <!-- TITLE --> CNN with GTSRB dataset - Full convolutions as a batch\n",
+    "<!-- DESC --> Episode 6 : Run Full convolution notebook as a batch\n",
+    "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
-    "## Episode 5.1 : Full Convolutions / run\n",
+    "## Objectives :\n",
+    "  - Run a notebook code as a **job**\n",
+    "  - Follow up with Tensorboard\n",
+    "  \n",
+    "The German Traffic Sign Recognition Benchmark (GTSRB) is a dataset with more than 50,000 photos of road signs from about 40 classes.  \n",
+    "The final aim is to recognise them !  \n",
+    "Description is available there : http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset\n",
     "\n",
+    "\n",
+    "## What we're going to do :\n",
     "Our main steps:\n",
     " - Run Full-convolution.ipynb as a batch :\n",
     "    - Notebook mode\n",
     "    - Script mode \n",
     " - Tensorboard follow up\n",
     "    \n",
-    "## 1/ Run a notebook as a batch\n",
-    "To run a notebook :  \n",
-    "```jupyter nbconvert --to notebook --execute <notebook>```"
+    "## Step 1 - Run a notebook as a batch\n",
+    "To run a notebook in a command line :  \n",
+    "```jupyter nbconvert (...) --to notebook --execute <notebook>```"
    ]
   },
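+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For example, a complete command could look like this (file names and the timeout value are illustrative assumptions, not imposed by this notebook) :\n",
+    "```bash\n",
+    "# Execute the notebook and save the executed copy under another name ;\n",
+    "# the timeout option keeps nbconvert from giving up on long training cells\n",
+    "jupyter nbconvert --ExecutePreprocessor.timeout=6000 --to notebook --execute \\\n",
+    "        --output full_convolutions_done.ipynb 05-Full-convolutions.ipynb\n",
+    "```"
+   ]
+  },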
   {
@@ -37,7 +46,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 2/ Export as a script (better choice)\n",
+    "## Step 2 - Export as a script (What we're going to do !)\n",
     "To export a notebook as a script :  \n",
     "```jupyter nbconvert --to script <notebook>```  \n",
     "To run the script :  \n",
@@ -87,7 +96,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 3/ Batch submission\n",
+    "## Step 2 - Batch submission\n",
     "Create batch script :"
    ]
   },
@@ -110,11 +119,11 @@
     "#OAR -n Full convolutions\n",
     "#OAR -t gpu\n",
     "#OAR -l /nodes=1/gpudevice=1,walltime=01:00:00\n",
-    "#OAR --stdout _batch/full_convolutions_%jobid%.out\n",
-    "#OAR --stderr _batch/full_convolutions_%jobid%.err\n",
-    "#OAR --project deeplearningshs\n",
+    "#OAR --stdout full_convolutions_%jobid%.out\n",
+    "#OAR --stderr full_convolutions_%jobid%.err\n",
+    "#OAR --project fidle\n",
     "\n",
-    "#---- For cpu\n",
+    "#---- With cpu\n",
     "# use :\n",
     "# OAR -l /nodes=1/core=32,walltime=01:00:00\n",
     "# and add a 2>/dev/null to ipython xxx\n",
@@ -177,15 +186,17 @@
    "metadata": {},
    "source": [
     "%%bash\n",
-    "./run/batch_full_convolutions.sh"
+    "./run/batch_full_convolutions.sh\n",
+    "oarsub (...)"
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": null,
+   "cell_type": "markdown",
    "metadata": {},
-   "outputs": [],
-   "source": []
+   "source": [
+    "---\n",
+    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+   ]
   }
  ],
  "metadata": {
@@ -204,7 +215,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.5"
+   "version": "3.7.6"
   }
  },
  "nbformat": 4,
diff --git a/GTSRB/99 Scripts-Tensorboard.ipynb b/GTSRB/99-Scripts-Tensorboard.ipynb
similarity index 100%
rename from GTSRB/99 Scripts-Tensorboard.ipynb
rename to GTSRB/99-Scripts-Tensorboard.ipynb
diff --git a/README.md b/README.md
index 353d301e4ed82f6f8b6afcab67ad708dcfc95955..88e6b3dfd687037448d077875b36bc1a3b8d3e9b 100644
--- a/README.md
+++ b/README.md
@@ -12,20 +12,21 @@ The objectives of this training, co-organized by the Formation Permanente CNRS a
  - Understanding Tensorflow/Kera**s and Jupyter lab** technologies on the GPU
  - Apprehend the **academic computing environments** Tier-2 (meso) and/or Tier-1 (national)
 
-## Available at this depot:
-You will find here :
- - the support of the presentations
- - all the practical work, in the form of Jupyter notebooks
- - sheets and practical information :
-   - **[Configuration SSH](../-/wikis/howto-ssh)**
-
-- [Regression with a Dense Network (DNN)](BHPD/01-DNN-Regression.ipynb)<br>
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;A Simple regression with a Dense Neural Network (DNN) - BHPD dataset
-
-- [Regression with a Dense Network (DNN) - Advanced code](BHPD/02-DNN-Regression-Premium.ipynb)<br>
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;More advanced example of DNN network code - BHPD dataset
+## Support and notebooks
+Get the **[presentation materials](Bientot)**  
+Note that useful information is also available in the **[wiki](https://gricad-gitlab.univ-grenoble-alpes.fr/talks/fidle/-/wikis/home)**  
 
+All examples and practical work are available as Jupyter notebooks :
 
+<!-- INDEX -->
+<!-- INDEX_BEGIN -->
+1. [Regression with a Dense Network (DNN)](BHPD/01-DNN-Regression.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;A Simple regression with a Dense Neural Network (DNN) - BHPD dataset
+1. [Regression with a Dense Network (DNN) - Advanced code](BHPD/02-DNN-Regression-Premium.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;More advanced example of DNN network code - BHPD dataset
+1. [CNN with GTSRB dataset - Data analysis and preparation](GTSRB/01-Preparation-of-data.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Episode 1: Data analysis and creation of a usable dataset
+<!-- INDEX_END -->