diff --git a/BHPD/01-DNN-Regression.ipynb b/BHPD/01-DNN-Regression.ipynb
index 1fbeea66baff9d2a58abaf6609ac8d33ba29aecf..57cca597a6c388e04c536e9ae81689f87a31e840 100644
--- a/BHPD/01-DNN-Regression.ipynb
+++ b/BHPD/01-DNN-Regression.ipynb
@@ -7,7 +7,7 @@
     "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
     "\n",
-    "# <!-- TITLE --> [REG1] - Regression with a Dense Network (DNN)\n",
+    "# <!-- TITLE --> [BHP1] - Regression with a Dense Network (DNN)\n",
     "<!-- DESC --> A Simple regression with a Dense Neural Network (DNN) - BHPD dataset\n",
     "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
diff --git a/BHPD/02-DNN-Regression-Premium.ipynb b/BHPD/02-DNN-Regression-Premium.ipynb
index 350a322bc68135e8dd778e2089d52ff2e38bd9dc..98e58005e85db8113603bde2bcb8c31e6715dce3 100644
--- a/BHPD/02-DNN-Regression-Premium.ipynb
+++ b/BHPD/02-DNN-Regression-Premium.ipynb
@@ -6,7 +6,7 @@
    "source": [
     "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
-    "# <!-- TITLE --> [REG2] - Regression with a Dense Network (DNN) - Advanced code\n",
+    "# <!-- TITLE --> [BHP2] - Regression with a Dense Network (DNN) - Advanced code\n",
     "  <!-- DESC -->  More advanced example of DNN network code - BHPD dataset\n",
     "  <!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
diff --git a/GTSRB/04-Data-augmentation.ipynb b/GTSRB/04-Data-augmentation.ipynb
index 8779785fe265a41666fde772ccedf5d159a6ae63..31227ea323022b30f98f42660ea1c895c6b55134 100644
--- a/GTSRB/04-Data-augmentation.ipynb
+++ b/GTSRB/04-Data-augmentation.ipynb
@@ -4,9 +4,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
-    "# <!-- TITLE --> CNN with GTSRB dataset - Data augmentation \n",
+    "# <!-- TITLE --> [GTS4] - CNN with GTSRB dataset - Data augmentation \n",
     "<!-- DESC --> Episode 4: Improving the results with data augmentation\n",
     "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
@@ -399,7 +399,7 @@
    "metadata": {},
    "source": [
     "---\n",
-    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
    ]
   }
  ],
diff --git a/GTSRB/05-Full-convolutions.ipynb b/GTSRB/05-Full-convolutions.ipynb
index c1d833a0304086a330b369968767a83c0901cb52..3e30a98e72aa94f6e6868f527f39f3b7a6ace8a8 100644
--- a/GTSRB/05-Full-convolutions.ipynb
+++ b/GTSRB/05-Full-convolutions.ipynb
@@ -4,9 +4,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
-    "# <!-- TITLE --> CNN with GTSRB dataset - Full convolutions \n",
+    "# <!-- TITLE --> [GTS5] - CNN with GTSRB dataset - Full convolutions \n",
     "<!-- DESC --> Episode 5: A lot of models, a lot of datasets and a lot of results.\n",
     "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
@@ -406,7 +406,7 @@
    "metadata": {},
    "source": [
     "---\n",
-    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
    ]
   }
  ],
diff --git a/GTSRB/06-Full-convolutions-batch.ipynb b/GTSRB/06-Full-convolutions-batch.ipynb
index d1204e310ffb6dbe3b7f5d66160d85304dd3fc0b..ec2f4807f6f89f23575752c716077e1281ae5da9 100644
--- a/GTSRB/06-Full-convolutions-batch.ipynb
+++ b/GTSRB/06-Full-convolutions-batch.ipynb
@@ -4,9 +4,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
-    "# <!-- TITLE --> CNN with GTSRB dataset - Full convolutions as a batch\n",
+    "# <!-- TITLE --> [GTS6] - CNN with GTSRB dataset - Full convolutions as a batch\n",
     "<!-- DESC --> Episode 6 : Run Full convolution notebook as a batch\n",
     "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
@@ -195,7 +195,7 @@
    "metadata": {},
    "source": [
     "---\n",
-    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
    ]
   }
  ],
diff --git a/GTSRB/05.2-Full-convolutions-reports.ipynb b/GTSRB/07-Full-convolutions-reports.ipynb
similarity index 98%
rename from GTSRB/05.2-Full-convolutions-reports.ipynb
rename to GTSRB/07-Full-convolutions-reports.ipynb
index 877b5b1631b3c872a51c6dc9d37226226e4327bf..d5eb2d904e686d74af3b4cd1f3a81a68389b62e0 100644
--- a/GTSRB/05.2-Full-convolutions-reports.ipynb
+++ b/GTSRB/07-Full-convolutions-reports.ipynb
@@ -4,17 +4,23 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "German Traffic Sign Recognition Benchmark (GTSRB)\n",
-    "=================================================\n",
-    "---\n",
-    "Introduction au Deep Learning  (IDLE) - S. Arias, E. Maldonado, JL. Parouty - CNRS/SARI/DEVLOG - 2020  \n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
+    "\n",
+    "# <!-- TITLE --> [GTS7] - Full convolutions report\n",
+    "<!-- DESC --> Displaying the reports of the different jobs\n",
+    "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
+    "\n",
+    "## Objectives :\n",
+    " - Compare the results of different dataset-model combinations\n",
+    "\n",
+    "The reports (json format) are generated by the \"Full convolution\" jobs [GTS5][GTS6]\n",
     "\n",
-    "## Episode 5.2 : Full Convolutions Reports\n",
     "\n",
-    "Ou main steps :\n",
-    " - Show reports\n",
+    "## What we're going to do :\n",
     "\n",
-    "## 1/ Import"
+    " - Read json files and display results\n",
+    "\n",
+    "## 1/ Python import"
    ]
   },
   {
@@ -640,18 +646,12 @@
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
+   "cell_type": "markdown",
    "metadata": {},
-   "outputs": [],
-   "source": []
+   "source": [
+    "---\n",
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
+   ]
   }
  ],
  "metadata": {
@@ -670,7 +670,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.5"
+   "version": "3.7.6"
   }
  },
  "nbformat": 4,
diff --git a/GTSRB/99-Scripts-Tensorboard.ipynb b/GTSRB/99-Scripts-Tensorboard.ipynb
index fbee18e458fbe3042bdc6b876858d226a7bf45e0..2bcec6018804fbaea31fbf92cbb5ec09c649e212 100644
--- a/GTSRB/99-Scripts-Tensorboard.ipynb
+++ b/GTSRB/99-Scripts-Tensorboard.ipynb
@@ -4,9 +4,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
-    "# <!-- TITLE --> Tensorboard with/from Jupyter \n",
+    "# <!-- TITLE --> [TSB1] - Tensorboard with/from Jupyter \n",
     "<!-- DESC --> 4 ways to use Tensorboard from the Jupyter environment\n",
     "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
@@ -179,7 +179,7 @@
    "metadata": {},
    "source": [
     "---\n",
-    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
    ]
   }
  ],
diff --git a/IMDB/01-Embedding-Keras.ipynb b/IMDB/01-Embedding-Keras.ipynb
index ad3c1f594795e2012815b0e9b0a240fe577910a1..8c25562dd396972976f4f3a235b2170ea29f1b01 100644
--- a/IMDB/01-Embedding-Keras.ipynb
+++ b/IMDB/01-Embedding-Keras.ipynb
@@ -4,9 +4,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
-    "# <!-- TITLE --> Text embedding with IMDB\n",
+    "# <!-- TITLE --> [IMDB1] - Text embedding with IMDB\n",
     "<!-- DESC --> A very classical example of word embedding for text classification (sentiment analysis)\n",
     "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
@@ -750,7 +750,7 @@
    "metadata": {},
    "source": [
     "---\n",
-    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
    ]
   }
  ],
diff --git a/IMDB/02-Prediction.ipynb b/IMDB/02-Prediction.ipynb
index f3eafda0f97e85fa3f63de3f76dc4833d2fe5cb6..f9474b9fd7929b1897389111a29468d4c53517cd 100644
--- a/IMDB/02-Prediction.ipynb
+++ b/IMDB/02-Prediction.ipynb
@@ -4,9 +4,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
-    "# <!-- TITLE --> Text embedding with IMDB - Reloaded\n",
+    "# <!-- TITLE --> [IMDB2] - Text embedding with IMDB - Reloaded\n",
     "<!-- DESC --> Example of reusing a previously saved model\n",
     "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
@@ -294,7 +294,7 @@
    "metadata": {},
    "source": [
     "---\n",
-    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
    ]
   }
  ],
diff --git a/IMDB/03-LSTM-Keras.ipynb b/IMDB/03-LSTM-Keras.ipynb
index db456f1a75bc1f9371513f619d82e6535d5ab184..869ffdca5cb34e7341b52c59eaf3ed901b38b2c5 100644
--- a/IMDB/03-LSTM-Keras.ipynb
+++ b/IMDB/03-LSTM-Keras.ipynb
@@ -4,9 +4,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![Fidle](../fidle/img/00-Fidle-header-01.png)\n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
-    "# <!-- TITLE --> Text embedding/LSTM model with IMDB\n",
+    "# <!-- TITLE --> [IMDB3] - Text embedding/LSTM model with IMDB\n",
     "<!-- DESC --> Still the same problem, but with a network combining embedding and LSTM\n",
     "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
@@ -416,7 +416,7 @@
    "metadata": {},
    "source": [
     "---\n",
-    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
    ]
   }
  ],
diff --git a/LinearReg/01-Linear-Regression.ipynb b/LinearReg/01-Linear-Regression.ipynb
index 73e0abb867a4a8369c65aa6d93d850f0b0f5aa41..1c3d16223b9451cc181ac2e3c434a2ea96b2547b 100644
--- a/LinearReg/01-Linear-Regression.ipynb
+++ b/LinearReg/01-Linear-Regression.ipynb
@@ -4,9 +4,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![header1](../fidle/img/00-Fidle-header-01.png)\n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
-    "# <!-- TITLE --> Linear regression with direct resolution\n",
+    "# <!-- TITLE --> [LINR1] - Linear regression with direct resolution\n",
     "<!-- DESC --> Direct determination of linear regression \n",
     "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
@@ -254,7 +254,7 @@
    "metadata": {},
    "source": [
     "---\n",
-    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
    ]
   }
  ],
diff --git a/LinearReg/02-Gradient-descent.ipynb b/LinearReg/02-Gradient-descent.ipynb
index 00f722e6d04a0aefeda4b55b0c0dcd6140b3aa16..cae799eb9d37919e73f8519239fb504dba4377a6 100644
--- a/LinearReg/02-Gradient-descent.ipynb
+++ b/LinearReg/02-Gradient-descent.ipynb
@@ -4,9 +4,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![header1](../fidle/img/00-Fidle-header-01.png)\n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
-    "# <!-- TITLE --> Linear regression with gradient descent\n",
+    "# <!-- TITLE --> [GRAD1] - Linear regression with gradient descent\n",
     "<!-- DESC --> An example of gradient descent in the simple case of a linear regression.\n",
     "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
@@ -545,7 +545,7 @@
    "metadata": {},
    "source": [
     "---\n",
-    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
    ]
   }
  ],
diff --git a/LinearReg/03-Polynomial-Regression.ipynb b/LinearReg/03-Polynomial-Regression.ipynb
index bdb2c5bed1d2753c8b20a91762fc51342c4c986d..474d2b80cdd84d68dce3df8a49311f8e25908c77 100644
--- a/LinearReg/03-Polynomial-Regression.ipynb
+++ b/LinearReg/03-Polynomial-Regression.ipynb
@@ -4,9 +4,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![header1](../fidle/img/00-Fidle-header-01.png)\n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
-    "# <!-- TITLE --> Complexity Syndrome\n",
+    "# <!-- TITLE --> [FIT1] - Complexity Syndrome\n",
     "<!-- DESC --> Illustration of the problem of complexity with the polynomial regression\n",
     "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
@@ -409,7 +409,7 @@
    "metadata": {},
    "source": [
     "---\n",
-    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
    ]
   }
  ],
diff --git a/LinearReg/04-Logistic-Regression.ipynb b/LinearReg/04-Logistic-Regression.ipynb
index 2e3e74ca94f0b004e23449d029b4f06b15e639e8..0e9a293a184c87c11df835bde13fff1ffd337a20 100644
--- a/LinearReg/04-Logistic-Regression.ipynb
+++ b/LinearReg/04-Logistic-Regression.ipynb
@@ -4,9 +4,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![header1](../fidle/img/00-Fidle-header-01.png)\n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
-    "# <!-- TITLE --> Logistic regression, in pure Tensorflow\n",
+    "# <!-- TITLE --> [LOGR1] - Logistic regression, in pure Tensorflow\n",
     "<!-- DESC --> Logistic Regression with Mini-Batch Gradient Descent using pure TensorFlow. \n",
     "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
     "\n",
@@ -821,7 +821,7 @@
    "metadata": {},
    "source": [
     "---\n",
-    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
    ]
   }
  ],
diff --git a/MNIST/01-DNN-MNIST.ipynb b/MNIST/01-DNN-MNIST.ipynb
index 7fc8612ac4b7c15407e74e2620889522120aa36a..5fc6d017e5b2a9e3546d3a479e05dedcab1bfe76 100644
--- a/MNIST/01-DNN-MNIST.ipynb
+++ b/MNIST/01-DNN-MNIST.ipynb
@@ -4,13 +4,22 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Deep Neural Network (DNN) - MNIST dataset\n",
-    "=========================================\n",
-    "---\n",
-    "Introduction au Deep Learning  (IDLE) - S. Arias, E. Maldonado, JL. Parouty - CNRS/SARI/DEVLOG - 2020  \n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
+    "\n",
+    "# <!-- TITLE --> [MNIST1] - Simple classification with DNN\n",
+    "<!-- DESC --> Example of classification with a fully connected neural network\n",
+    "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
+    "\n",
+    "## Objectives :\n",
+    " - Understanding the principle of a classifier DNN network \n",
+    " - Implementation with Keras \n",
+    "\n",
+    "\n",
+    "The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) (Modified National Institute of Standards and Technology) is a must for Deep Learning.  \n",
+    "It consists of 60,000 small images of handwritten digits for training and 10,000 for testing.\n",
+    "\n",
     "\n",
-    "## A very simple example of **classification** :\n",
-    "...but a must-have example, a classic !\n",
+    "## What we're going to do :\n",
     "\n",
     " - Retrieve data\n",
     " - Preparing the data\n",
@@ -961,11 +970,12 @@
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": null,
+   "cell_type": "markdown",
    "metadata": {},
-   "outputs": [],
-   "source": []
+   "source": [
+    "---\n",
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
+   ]
   }
  ],
  "metadata": {
@@ -984,7 +994,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.5"
+   "version": "3.7.6"
   }
  },
  "nbformat": 4,
diff --git a/Prerequisites/Numpy.ipynb b/Prerequisites/Numpy.ipynb
index 9dc65aba45b314b0ccb2113d81c05195255342b6..d6279c9d8539e9b1d679ac939570e0b78cfe862c 100644
--- a/Prerequisites/Numpy.ipynb
+++ b/Prerequisites/Numpy.ipynb
@@ -8,10 +8,16 @@
     }
    },
    "source": [
-    "![header1](../fidle/img/00-Fidle-header-01.png)\n",
+    "<img width=\"800px\" src=\"../fidle/img/00-Fidle-header-01.svg\"></img>\n",
     "\n",
-    "# A short introduction to Numpy\n",
-    "Strongly inspired by the UGA Python Introduction Course  \n",
+    "# <!-- TITLE --> [NP1] - A short introduction to Numpy\n",
+    "<!-- DESC --> Numpy is an essential tool for scientific Python.\n",
+    "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
+    "\n",
+    "## Objectives :\n",
+    " - Understand the main principles of Numpy and its potential\n",
+    "\n",
+    "Note : This notebook is strongly inspired by the UGA Python Introduction Course  \n",
     "See : **https://gricad-gitlab.univ-grenoble-alpes.fr/python-uga/py-training-2017**"
    ]
   },
@@ -836,7 +842,7 @@
    "metadata": {},
    "source": [
     "---\n",
-    "![](../fidle/img/00-Fidle-logo-01_s.png)"
+    "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
    ]
   }
  ],
diff --git a/README.md b/README.md
index e2364c96ecdfa07c0075fb988a894bc30ad351b5..ff43f757ccc491747c1c5893388b0e0add87b67d 100644
--- a/README.md
+++ b/README.md
@@ -29,17 +29,21 @@ Useful information is also available in the [wiki](https://gricad-gitlab.univ-gr
 <!-- DO NOT REMOVE THIS TAG !!! -->
 <!-- INDEX -->
 <!-- INDEX_BEGIN -->
-1. [Linear regression with direct resolution](LinearReg/01-Linear-Regression.ipynb)<br>
+1. [[NP1] - A short introduction to Numpy](Prerequisites/Numpy.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Numpy is an essential tool for scientific Python.
+1. [[LINR1] - Linear regression with direct resolution](LinearReg/01-Linear-Regression.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Direct determination of linear regression 
-1. [Linear regression with gradient descent](LinearReg/02-Gradient-descent.ipynb)<br>
+1. [[GRAD1] - Linear regression with gradient descent](LinearReg/02-Gradient-descent.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;An example of gradient descent in the simple case of a linear regression.
-1. [Complexity Syndrome](LinearReg/03-Polynomial-Regression.ipynb)<br>
+1. [[FIT1] - Complexity Syndrome](LinearReg/03-Polynomial-Regression.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Illustration of the problem of complexity with the polynomial regression
-1. [Logistic regression, in pure Tensorflow](LinearReg/04-Logistic-Regression.ipynb)<br>
+1. [[LOGR1] - Logistic regression, in pure Tensorflow](LinearReg/04-Logistic-Regression.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Logistic Regression with Mini-Batch Gradient Descent using pure TensorFlow. 
-1. [[REG1] - Regression with a Dense Network (DNN)](BHPD/01-DNN-Regression.ipynb)<br>
+1. [[MNIST1] - Simple classification with DNN](MNIST/01-DNN-MNIST.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Example of classification with a fully connected neural network
+1. [[BHP1] - Regression with a Dense Network (DNN)](BHPD/01-DNN-Regression.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;A Simple regression with a Dense Neural Network (DNN) - BHPD dataset
-1. [[REG2] - Regression with a Dense Network (DNN) - Advanced code](BHPD/02-DNN-Regression-Premium.ipynb)<br>
+1. [[BHP2] - Regression with a Dense Network (DNN) - Advanced code](BHPD/02-DNN-Regression-Premium.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;More advanced example of DNN network code - BHPD dataset
 1. [[GTS1] - CNN with GTSRB dataset - Data analysis and preparation](GTSRB/01-Preparation-of-data.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Episode 1: Data analysis and creation of a usable dataset
@@ -47,19 +51,21 @@ Useful information is also available in the [wiki](https://gricad-gitlab.univ-gr
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Episode 2 : First convolutions and first results
 1. [[GTS3] - CNN with GTSRB dataset - Monitoring ](GTSRB/03-Tracking-and-visualizing.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Episode 3: Monitoring and analysing training, managing checkpoints
-1. [CNN with GTSRB dataset - Data augmentation ](GTSRB/04-Data-augmentation.ipynb)<br>
+1. [[GTS4] - CNN with GTSRB dataset - Data augmentation ](GTSRB/04-Data-augmentation.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Episode 4: Improving the results with data augmentation
-1. [CNN with GTSRB dataset - Full convolutions ](GTSRB/05-Full-convolutions.ipynb)<br>
+1. [[GTS5] - CNN with GTSRB dataset - Full convolutions ](GTSRB/05-Full-convolutions.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Episode 5: A lot of models, a lot of datasets and a lot of results.
-1. [CNN with GTSRB dataset - Full convolutions as a batch](GTSRB/06-Full-convolutions-batch.ipynb)<br>
+1. [[GTS6] - CNN with GTSRB dataset - Full convolutions as a batch](GTSRB/06-Full-convolutions-batch.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Episode 6 : Run Full convolution notebook as a batch
-1. [Tensorboard with/from Jupyter ](GTSRB/99-Scripts-Tensorboard.ipynb)<br>
+1. [[GTS7] - Full convolutions report](GTSRB/07-Full-convolutions-reports.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Displaying the reports of the different jobs
+1. [[TSB1] - Tensorboard with/from Jupyter ](GTSRB/99-Scripts-Tensorboard.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4 ways to use Tensorboard from the Jupyter environment
-1. [Text embedding with IMDB](IMDB/01-Embedding-Keras.ipynb)<br>
+1. [[IMDB1] - Text embedding with IMDB](IMDB/01-Embedding-Keras.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;A very classical example of word embedding for text classification (sentiment analysis)
-1. [Text embedding with IMDB - Reloaded](IMDB/02-Prediction.ipynb)<br>
+1. [[IMDB2] - Text embedding with IMDB - Reloaded](IMDB/02-Prediction.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Example of reusing a previously saved model
-1. [Text embedding/LSTM model with IMDB](IMDB/03-LSTM-Keras.ipynb)<br>
+1. [[IMDB3] - Text embedding/LSTM model with IMDB](IMDB/03-LSTM-Keras.ipynb)<br>
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Still the same problem, but with a network combining embedding and LSTM
 <!-- INDEX_END -->
 
diff --git a/README.md.old b/README.md.old
new file mode 100644
index 0000000000000000000000000000000000000000..e2364c96ecdfa07c0075fb988a894bc30ad351b5
--- /dev/null
+++ b/README.md.old
@@ -0,0 +1,81 @@
+[<img width="600px" src="fidle/img/00-Fidle-titre-01.svg"></img>](#)
+
+## A propos
+
+This repository contains all the documents and links of the **Fidle Training**.  
+
+The objectives of this training, co-organized by the Formation Permanente CNRS and the SARI and DEVLOG networks, are :
+ - Understanding the **bases of deep learning** neural networks (Deep Learning)
+ - Develop a **first experience** through simple and representative examples
+ - Understand the different types of networks, their **architectures** and their **use cases**.
+ - Understanding **Tensorflow/Keras and Jupyter lab** technologies on the GPU
+ - Apprehend the **academic computing environments** Tier-2 (meso) and/or Tier-1 (national)
+
+## Course materials
+**[<img width="50px" src="fidle/img/00-Fidle-pdf.svg"></img>
+Get the course slides](https://cloud.univ-grenoble-alpes.fr/index.php/s/z7XZA36xKkMcaTS)**  
+
+
+
+<!-- ![pdf](fidle/img/00-Fidle-pdf.png) -->
+Useful information is also available in the [wiki](https://gricad-gitlab.univ-grenoble-alpes.fr/talks/fidle/-/wikis/home)
+
+
+## Jupyter notebooks
+
+[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/git/https%3A%2F%2Fgricad-gitlab.univ-grenoble-alpes.fr%2Ftalks%2Fdeeplearning.git/master?urlpath=lab/tree/index.ipynb)
+
+
+<!-- DO NOT REMOVE THIS TAG !!! -->
+<!-- INDEX -->
+<!-- INDEX_BEGIN -->
+1. [Linear regression with direct resolution](LinearReg/01-Linear-Regression.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Direct determination of linear regression 
+1. [Linear regression with gradient descent](LinearReg/02-Gradient-descent.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;An example of gradient descent in the simple case of a linear regression.
+1. [Complexity Syndrome](LinearReg/03-Polynomial-Regression.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Illustration of the problem of complexity with the polynomial regression
+1. [Logistic regression, in pure Tensorflow](LinearReg/04-Logistic-Regression.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Logistic Regression with Mini-Batch Gradient Descent using pure TensorFlow. 
+1. [[REG1] - Regression with a Dense Network (DNN)](BHPD/01-DNN-Regression.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;A Simple regression with a Dense Neural Network (DNN) - BHPD dataset
+1. [[REG2] - Regression with a Dense Network (DNN) - Advanced code](BHPD/02-DNN-Regression-Premium.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;More advanced example of DNN network code - BHPD dataset
+1. [[GTS1] - CNN with GTSRB dataset - Data analysis and preparation](GTSRB/01-Preparation-of-data.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Episode 1: Data analysis and creation of a usable dataset
+1. [[GTS2] - CNN with GTSRB dataset - First convolutions](GTSRB/02-First-convolutions.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Episode 2 : First convolutions and first results
+1. [[GTS3] - CNN with GTSRB dataset - Monitoring ](GTSRB/03-Tracking-and-visualizing.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Episode 3: Monitoring and analysing training, managing checkpoints
+1. [CNN with GTSRB dataset - Data augmentation ](GTSRB/04-Data-augmentation.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Episode 4: Improving the results with data augmentation
+1. [CNN with GTSRB dataset - Full convolutions ](GTSRB/05-Full-convolutions.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Episode 5: A lot of models, a lot of datasets and a lot of results.
+1. [CNN with GTSRB dataset - Full convolutions as a batch](GTSRB/06-Full-convolutions-batch.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Episode 6 : Run Full convolution notebook as a batch
+1. [Tensorboard with/from Jupyter ](GTSRB/99-Scripts-Tensorboard.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4 ways to use Tensorboard from the Jupyter environment
+1. [Text embedding with IMDB](IMDB/01-Embedding-Keras.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;A very classical example of word embedding for text classification (sentiment analysis)
+1. [Text embedding with IMDB - Reloaded](IMDB/02-Prediction.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Example of reusing a previously saved model
+1. [Text embedding/LSTM model with IMDB](IMDB/03-LSTM-Keras.ipynb)<br>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Still the same problem, but with a network combining embedding and LSTM
+<!-- INDEX_END -->
+
+
+
+## Installation
+
+A procedure for **configuring** and **starting Jupyter** is available in the **[Wiki](https://gricad-gitlab.univ-grenoble-alpes.fr/talks/fidle/-/wikis/howto-jupyter)**.
+
+## Licence
+
+\[en\] Attribution - NonCommercial - ShareAlike 4.0 International (CC BY-NC-SA 4.0)  
+\[Fr\] Attribution - Pas d’Utilisation Commerciale - Partage dans les Mêmes Conditions 4.0 International  
+See [License](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).  
+See [Disclaimer](https://creativecommons.org/licenses/by-nc-sa/4.0/#).  
+
+
+----
+[<img width="80px" src="fidle/img/00-Fidle-logo-01.svg"></img>](#)
\ No newline at end of file
diff --git a/fidle/Charte.ipynb b/fidle/Charte.ipynb
index c69dff70c5824f044d8ff664b611df24ad79d65c..fbd6f0fd67366e60fc095edf625720b10cca80c5 100644
--- a/fidle/Charte.ipynb
+++ b/fidle/Charte.ipynb
@@ -30,6 +30,13 @@
     "---\n",
     "<img width=\"80px\" src=\"../fidle/img/00-Fidle-logo-01.svg\"></img>"
    ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
   }
  ],
  "metadata": {