"iopub.execute_input": "2022-03-29T20:08:36.649716Z",
"iopub.status.busy": "2022-03-29T20:08:36.646330Z",
"iopub.status.idle": "2022-03-29T20:08:36.658519Z",
"shell.execute_reply": "2022-03-29T20:08:36.658045Z"
"jupyter": {
"source_hidden": true
}
},
"outputs": [
{
"data": {
"text/markdown": [
"<a name=\"top\"></a>\n",
"\n",
"[<img width=\"600px\" src=\"fidle/img/00-Fidle-titre-01.svg\"></img>](#top)\n",
"<!-- --------------------------------------------------- -->\n",
"<!-- To correctly view this README under Jupyter Lab -->\n",
"<!-- Open the notebook: README.ipynb! -->\n",
"<!-- --------------------------------------------------- -->\n",
"\n",
"This repository contains all the documents and links of the **Fidle Training** . \n",
"Fidle (for Formation Introduction au Deep Learning) is a 2-day training session \n",
"co-organized by the Formation Permanente CNRS and the Resinfo/SARI and DevLOG CNRS networks. \n",
"The objectives of this training are :\n",
" - Understanding the **bases of Deep Learning** neural networks\n",
" - Develop a **first experience** through simple and representative examples\n",
" - Understanding **Tensorflow/Keras** and **Jupyter lab** technologies\n",
" - Apprehend the **academic computing environments** Tier-2 or Tier-1 with powerfull GPU\n",
"\n",
"For more information, see **https://fidle.cnrs.fr** :\n",
"- **[Presentation of the training](https://fidle.cnrs.fr/presentation)**\n",
"- **[Program 2021/2022](https://fidle.cnrs.fr/programme)**\n",
"- [Subscribe to the list](https://fidle.cnrs.fr/listeinfo), to stay informed !\n",
"- [Find us on youtube](https://fidle.cnrs.fr/youtube)\n",
"- [Corrected notebooks](https://fidle.cnrs.fr/done)\n",
"[<img width=\"200px\" style=\"vertical-align:middle\" src=\"fidle/img/00-Mail_contact.svg\"></img>](#top)\n",
"Current Version : <!-- VERSION_BEGIN -->\n",
"| | | | |\n",
"|:--:|:--:|:--:|:--:|\n",
"| **[<img width=\"50px\" src=\"fidle/img/00-Fidle-pdf.svg\"></img><br>Course slides](https://fidle.cnrs.fr/supports)**<br>The course in pdf format<br>(12 Mo)| **[<img width=\"50px\" src=\"fidle/img/00-Notebooks.svg\"></img><br>Notebooks](https://fidle.cnrs.fr/notebooks)**<br> Get a Zip or clone this repository <br>(40 Mo)| **[<img width=\"50px\" src=\"fidle/img/00-Datasets-tar.svg\"></img><br>Datasets](https://fidle.cnrs.fr/fidle-datasets.tar)**<br>All the needed datasets<br>(1.2 Go)|**[<img width=\"50px\" src=\"fidle/img/00-Videos.svg\"></img><br>Videos](https://fidle.cnrs.fr/youtube)**<br> Our Youtube channel <br> |\n",
"Have a look about **[How to get and install](https://fidle.cnrs.fr/installation)** these notebooks and datasets.\n",
"\n",
"\n",
"## Jupyter notebooks\n",
"\n",
"<!-- INDEX_BEGIN -->\n",
"### Linear and logistic regression\n",
"- **[LINR1](LinearReg/01-Linear-Regression.ipynb)** - [Linear regression with direct resolution](LinearReg/01-Linear-Regression.ipynb) \n",
"Low-level implementation, using numpy, of a direct resolution for a linear regression\n",
"- **[GRAD1](LinearReg/02-Gradient-descent.ipynb)** - [Linear regression with gradient descent](LinearReg/02-Gradient-descent.ipynb) \n",
"Low level implementation of a solution by gradient descent. Basic and stochastic approach.\n",
"- **[POLR1](LinearReg/03-Polynomial-Regression.ipynb)** - [Complexity Syndrome](LinearReg/03-Polynomial-Regression.ipynb) \n",
"Illustration of the problem of complexity with the polynomial regression\n",
"- **[LOGR1](LinearReg/04-Logistic-Regression.ipynb)** - [Logistic regression](LinearReg/04-Logistic-Regression.ipynb) \n",
"Simple example of logistic regression with a sklearn solution\n",
"\n",
"### Perceptron Model 1957\n",
"- **[PER57](IRIS/01-Simple-Perceptron.ipynb)** - [Perceptron Model 1957](IRIS/01-Simple-Perceptron.ipynb) \n",
"Example of use of a Perceptron, with sklearn and IRIS dataset of 1936 !\n",
"\n",
"### Basic regression using DNN\n",
"- **[BHPD1](BHPD/01-DNN-Regression.ipynb)** - [Regression with a Dense Network (DNN)](BHPD/01-DNN-Regression.ipynb) \n",
"Simple example of a regression with the dataset Boston Housing Prices Dataset (BHPD)\n",
"- **[BHPD2](BHPD/02-DNN-Regression-Premium.ipynb)** - [Regression with a Dense Network (DNN) - Advanced code](BHPD/02-DNN-Regression-Premium.ipynb) \n",
"A more advanced implementation of the precedent example\n",
"\n",
"### Basic classification using a DNN\n",
"- **[MNIST1](MNIST/01-DNN-MNIST.ipynb)** - [Simple classification with DNN](MNIST/01-DNN-MNIST.ipynb) \n",
"An example of classification using a dense neural network for the famous MNIST dataset\n",
"- **[MNIST2](MNIST/02-CNN-MNIST.ipynb)** - [Simple classification with CNN](MNIST/02-CNN-MNIST.ipynb) \n",
"An example of classification using a convolutional neural network for the famous MNIST dataset\n",
"\n",
"### Images classification with Convolutional Neural Networks (CNN)\n",
"- **[GTSRB1](GTSRB/01-Preparation-of-data.ipynb)** - [Dataset analysis and preparation](GTSRB/01-Preparation-of-data.ipynb) \n",
"Episode 1 : Analysis of the GTSRB dataset and creation of an enhanced dataset\n",
"- **[GTSRB2](GTSRB/02-First-convolutions.ipynb)** - [First convolutions](GTSRB/02-First-convolutions.ipynb) \n",
"Episode 2 : First convolutions and first classification of our traffic signs\n",
"- **[GTSRB3](GTSRB/03-Tracking-and-visualizing.ipynb)** - [Training monitoring](GTSRB/03-Tracking-and-visualizing.ipynb) \n",
"Episode 3 : Monitoring, analysis and check points during a training session\n",
"- **[GTSRB4](GTSRB/04-Data-augmentation.ipynb)** - [Data augmentation ](GTSRB/04-Data-augmentation.ipynb) \n",
"Episode 4 : Adding data by data augmentation when we lack it, to improve our results\n",
"- **[GTSRB5](GTSRB/05-Full-convolutions.ipynb)** - [Full convolutions](GTSRB/05-Full-convolutions.ipynb) \n",
"Episode 5 : A lot of models, a lot of datasets and a lot of results.\n",
"- **[GTSRB6](GTSRB/06-Notebook-as-a-batch.ipynb)** - [Full convolutions as a batch](GTSRB/06-Notebook-as-a-batch.ipynb) \n",
"Episode 6 : To compute bigger, use your notebook in batch mode\n",
"- **[GTSRB7](GTSRB/07-Show-report.ipynb)** - [Batch reports](GTSRB/07-Show-report.ipynb) \n",
"Episode 7 : Displaying our jobs report, and the winner is...\n",
"- **[GTSRB10](GTSRB/batch_oar.sh)** - [OAR batch script submission](GTSRB/batch_oar.sh) \n",
"Bash script for an OAR batch submission of an ipython code\n",
"- **[GTSRB11](GTSRB/batch_slurm.sh)** - [SLURM batch script](GTSRB/batch_slurm.sh) \n",
"Bash script for a Slurm batch submission of an ipython code\n",
"\n",
"### Sentiment analysis with word embedding\n",
"- **[IMDB1](IMDB/01-One-hot-encoding.ipynb)** - [Sentiment analysis with hot-one encoding](IMDB/01-One-hot-encoding.ipynb) \n",
"A basic example of sentiment analysis with sparse encoding, using a dataset from Internet Movie Database (IMDB)\n",
"- **[IMDB2](IMDB/02-Keras-embedding.ipynb)** - [Sentiment analysis with text embedding](IMDB/02-Keras-embedding.ipynb) \n",
"A very classical example of word embedding with a dataset from Internet Movie Database (IMDB)\n",
"- **[IMDB3](IMDB/03-Prediction.ipynb)** - [Reload and reuse a saved model](IMDB/03-Prediction.ipynb) \n",
"Retrieving a saved model to perform a sentiment analysis (movie review)\n",
"- **[IMDB4](IMDB/04-Show-vectors.ipynb)** - [Reload embedded vectors](IMDB/04-Show-vectors.ipynb) \n",
"Retrieving embedded vectors from our trained model\n",
"- **[IMDB5](IMDB/05-LSTM-Keras.ipynb)** - [Sentiment analysis with a RNN network](IMDB/05-LSTM-Keras.ipynb) \n",
"Still the same problem, but with a network combining embedding and RNN\n",
"\n",
"### Time series with Recurrent Neural Network (RNN)\n",
"- **[LADYB1](SYNOP/LADYB1-Ladybug.ipynb)** - [Prediction of a 2D trajectory via RNN](SYNOP/LADYB1-Ladybug.ipynb) \n",
"Artificial dataset generation and prediction attempt via a recurrent network\n",
"- **[SYNOP1](SYNOP/SYNOP1-Preparation-of-data.ipynb)** - [Preparation of data](SYNOP/SYNOP1-Preparation-of-data.ipynb) \n",
"Episode 1 : Data analysis and preparation of a usuable meteorological dataset (SYNOP)\n",
"- **[SYNOP2](SYNOP/SYNOP2-First-predictions.ipynb)** - [First predictions at 3h](SYNOP/SYNOP2-First-predictions.ipynb) \n",
"Episode 2 : RNN training session for weather prediction attempt at 3h\n",
"- **[SYNOP3](SYNOP/SYNOP3-12h-predictions.ipynb)** - [12h predictions](SYNOP/SYNOP3-12h-predictions.ipynb) \n",
"Episode 3: Attempt to predict in a more longer term \n",
"### Sentiment analysis with transformers\n",
"- **[TRANS1](Transformers/01-Distilbert.ipynb)** - [IMDB, Sentiment analysis with Transformers ](Transformers/01-Distilbert.ipynb) \n",
"Using a Tranformer to perform a sentiment analysis (IMDB) - Jean Zay version\n",
"- **[TRANS2](Transformers/02-distilbert_colab.ipynb)** - [IMDB, Sentiment analysis with Transformers ](Transformers/02-distilbert_colab.ipynb) \n",
"Using a Tranformer to perform a sentiment analysis (IMDB) - Colab version\n",
"\n",
"### Unsupervised learning with an autoencoder neural network (AE)\n",
"- **[AE1](AE/01-Prepare-MNIST-dataset.ipynb)** - [Prepare a noisy MNIST dataset](AE/01-Prepare-MNIST-dataset.ipynb) \n",
"Episode 1: Preparation of a noisy MNIST dataset\n",
"- **[AE2](AE/02-AE-with-MNIST.ipynb)** - [Building and training an AE denoiser model](AE/02-AE-with-MNIST.ipynb) \n",
"Episode 1 : Construction of a denoising autoencoder and training of it with a noisy MNIST dataset.\n",
"- **[AE3](AE/03-AE-with-MNIST-post.ipynb)** - [Playing with our denoiser model](AE/03-AE-with-MNIST-post.ipynb) \n",
"Episode 2 : Using the previously trained autoencoder to denoise data\n",
"- **[AE4](AE/04-ExtAE-with-MNIST.ipynb)** - [Denoiser and classifier model](AE/04-ExtAE-with-MNIST.ipynb) \n",
"Episode 4 : Construction of a denoiser and classifier model\n",
"- **[AE5](AE/05-ExtAE-with-MNIST.ipynb)** - [Advanced denoiser and classifier model](AE/05-ExtAE-with-MNIST.ipynb) \n",
"Episode 5 : Construction of an advanced denoiser and classifier model\n",
"\n",
"### Generative network with Variational Autoencoder (VAE)\n",
"- **[VAE1](VAE/01-VAE-with-MNIST.ipynb)** - [First VAE, using functional API (MNIST dataset)](VAE/01-VAE-with-MNIST.ipynb) \n",
"Construction and training of a VAE, using functional APPI, with a latent space of small dimension.\n",
"- **[VAE2](VAE/02-VAE-with-MNIST.ipynb)** - [VAE, using a custom model class (MNIST dataset)](VAE/02-VAE-with-MNIST.ipynb) \n",
"Construction and training of a VAE, using model subclass, with a latent space of small dimension.\n",
"- **[VAE3](VAE/03-VAE-with-MNIST-post.ipynb)** - [Analysis of the VAE's latent space of MNIST dataset](VAE/03-VAE-with-MNIST-post.ipynb) \n",
"Visualization and analysis of the VAE's latent space of the dataset MNIST\n",
"- **[VAE5](VAE/05-About-CelebA.ipynb)** - [Another game play : About the CelebA dataset](VAE/05-About-CelebA.ipynb) \n",
"Episode 1 : Presentation of the CelebA dataset and problems related to its size\n",
"- **[VAE6](VAE/06-Prepare-CelebA-datasets.ipynb)** - [Generation of a clustered dataset](VAE/06-Prepare-CelebA-datasets.ipynb) \n",
"Episode 2 : Analysis of the CelebA dataset and creation of an clustered and usable dataset\n",
"- **[VAE7](VAE/07-Check-CelebA.ipynb)** - [Checking the clustered dataset](VAE/07-Check-CelebA.ipynb) \n",
"Episode : 3 Clustered dataset verification and testing of our datagenerator\n",
"- **[VAE8](VAE/08-VAE-with-CelebA.ipynb)** - [Training session for our VAE](VAE/08-VAE-with-CelebA.ipynb) \n",
"Episode 4 : Training with our clustered datasets in notebook or batch mode\n",
"- **[VAE9](VAE/09-VAE-with-CelebA-192x160.ipynb)** - [Training session for our VAE with 192x160 images](VAE/09-VAE-with-CelebA-192x160.ipynb) \n",
"Episode 4 : Training with our clustered datasets in notebook or batch mode\n",
"- **[VAE10](VAE/batch_slurm.sh)** - [SLURM batch script](VAE/batch_slurm.sh) \n",
"Bash script for SLURM batch submission of VAE8 notebooks \n",
"### Generative Adversarial Networks (GANs)\n",
"- **[SHEEP1](DCGAN/01-DCGAN-Draw-me-a-sheep.ipynb)** - [A first DCGAN to Draw a Sheep](DCGAN/01-DCGAN-Draw-me-a-sheep.ipynb) \n",
"Episode 1 : Draw me a sheep, revisited with a DCGAN\n",
"- **[SHEEP2](DCGAN/02-WGANGP-Draw-me-a-sheep.ipynb)** - [A WGAN-GP to Draw a Sheep](DCGAN/02-WGANGP-Draw-me-a-sheep.ipynb) \n",
"Episode 2 : Draw me a sheep, revisited with a WGAN-GP\n",
"### Deep Reinforcement Learning (DRL)\n",
"- **[DRL1](DRL/FIDLE_DQNfromScratch.ipynb)** - [Solving CartPole with DQN](DRL/FIDLE_DQNfromScratch.ipynb) \n",
"Using a a Deep Q-Network to play CartPole - an inverted pendulum problem (PyTorch)\n",
"- **[DRL2](DRL/FIDLE_rl_baselines_zoo.ipynb)** - [RL Baselines3 Zoo: Training in Colab](DRL/FIDLE_rl_baselines_zoo.ipynb) \n",
"Demo of Stable baseline3 with Colab\n",
"\n",
"### Miscellaneous\n",
"- **[ACTF1](Misc/Activation-Functions.ipynb)** - [Activation functions](Misc/Activation-Functions.ipynb) \n",
"Some activation functions, with their derivatives.\n",
"- **[NP1](Misc/Numpy.ipynb)** - [A short introduction to Numpy](Misc/Numpy.ipynb) \n",
"Numpy is an essential tool for the Scientific Python.\n",
"- **[SCRATCH1](Misc/Scratchbook.ipynb)** - [Scratchbook](Misc/Scratchbook.ipynb) \n",
"A scratchbook for small examples\n",
"- **[TSB1](Misc/Using-Tensorboard.ipynb)** - [Tensorboard with/from Jupyter ](Misc/Using-Tensorboard.ipynb) \n",
"4 ways to use Tensorboard from the Jupyter environment\n",
"<!-- INDEX_END -->\n",
"\n",
"\n",
"## Installation\n",
"\n",
"Have a look about **[How to get and install](https://fidle.cnrs.fr/installation)** these notebooks and datasets.\n",
"\n",
"## Licence\n",
"\n",
"[<img width=\"100px\" src=\"fidle/img/00-fidle-CC BY-NC-SA.svg\"></img>](https://creativecommons.org/licenses/by-nc-sa/4.0/) \n",
"\\[en\\] Attribution - NonCommercial - ShareAlike 4.0 International (CC BY-NC-SA 4.0) \n",
"\\[Fr\\] Attribution - Pas d’Utilisation Commerciale - Partage dans les Mêmes Conditions 4.0 International \n",
"See [License](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode). \n",
"See [Disclaimer](https://creativecommons.org/licenses/by-nc-sa/4.0/#). \n",
"\n",
"\n",
"----\n",
"[<img width=\"80px\" src=\"fidle/img/00-Fidle-logo-01.svg\"></img>](#top)\n"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from IPython.display import display,Markdown\n",
"display(Markdown(open('README.md', 'r').read()))\n",
"#\n",
"# This README is visible under Jupiter LAb ! :-)"
]
}
],
"metadata": {
"kernelspec": {
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",