About
This repository contains all the documents and links for the Fidle training.
Fidle (for Formation Introduction au Deep Learning) is a 2-day training session
co-organized by the Formation Permanente CNRS and the SARI and DEVLOG networks.
The objectives of this training are:
- Understand the basics of Deep Learning neural networks
- Develop a first experience through simple and representative examples
- Understand the TensorFlow/Keras and Jupyter Lab technologies
- Become familiar with Tier-1 and Tier-2 academic computing environments with powerful GPUs
For more information, you can contact us at:
Current version: 1.2b1 DEV
Course materials
- Course slides: the course in PDF format (12 MB)
- Notebooks: get a zip or clone this repository (10 MB)
- Datasets: all the needed datasets (1.2 GB)
Have a look at how to get and install these notebooks and datasets.
Jupyter notebooks
Linear and logistic regression
- LINR1 - Linear regression with direct resolution
  Low-level implementation, using numpy, of a direct resolution for a linear regression
- GRAD1 - Linear regression with gradient descent
  Low-level implementation of a solution by gradient descent, with basic and stochastic approaches
- POLR1 - Complexity Syndrome
  An illustration of the problem of complexity with polynomial regression
- LOGR1 - Logistic regression
  A simple example of logistic regression with a sklearn solution
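The two approaches contrasted by LINR1 and GRAD1 can be sketched side by side in numpy; the synthetic data, learning rate and iteration count below are illustrative choices, not the notebooks' actual settings:

```python
import numpy as np

# Synthetic data for y = 2x + 1 with a little noise (illustrative values)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=200)

# Direct resolution (normal equation): theta = (X^T X)^-1 X^T y
X = np.column_stack([np.ones_like(x), x])        # add a bias column
theta_direct = np.linalg.solve(X.T @ X, X.T @ y)

# The same model fitted by batch gradient descent on the MSE loss
theta = np.zeros(2)
lr = 0.01
for _ in range(5000):
    grad = 2 / len(y) * X.T @ (X @ theta - y)    # gradient of the MSE
    theta -= lr * grad

# Both estimates end up close to the true [intercept, slope] = [1, 2]
```

The direct resolution is exact in one shot but requires solving a linear system; gradient descent only needs the gradient, which is what scales to deep networks.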
Perceptron Model 1957
- PER57 - Perceptron Model 1957
  An example of use of a Perceptron, with sklearn and the IRIS dataset of 1936!
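The 1957 learning rule itself is short enough to sketch in plain numpy (the notebook uses sklearn's Perceptron instead; the toy data below is a made-up stand-in for two IRIS classes):

```python
import numpy as np

def perceptron_fit(X, y, epochs=20, lr=1.0):
    """Rosenblatt-style perceptron; y must be in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:      # misclassified: nudge the boundary
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Tiny linearly separable toy set (illustrative, not real IRIS data)
X = np.array([[2.0, 1.0], [3.0, 2.0], [0.5, 3.0], [1.0, 4.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron_fit(X, y)
pred = np.sign(X @ w + b)                   # perfect on this separable toy set
```

On linearly separable data the rule is guaranteed to converge, which is exactly the historical appeal (and limit) of the 1957 model.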
Basic regression using DNN
- BHPD1 - Regression with a Dense Network (DNN)
  A simple example of regression with the Boston Housing Prices Dataset (BHPD)
- BHPD2 - Regression with a Dense Network (DNN) - Advanced code
  A more advanced implementation of the previous example
Basic classification using a DNN
- MNIST1 - Simple classification with DNN
  An example of classification using a dense neural network for the famous MNIST dataset
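The forward pass of such a dense classifier reduces to matrix products, a ReLU and a final softmax; a minimal numpy sketch with illustrative layer sizes (784 inputs for flattened 28x28 images, 128 hidden units, 10 classes — the weights here are random, not trained):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax, shifted for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 784))                 # 4 flattened 28x28 "images"
W1, b1 = rng.normal(size=(784, 128)) * 0.01, np.zeros(128)
W2, b2 = rng.normal(size=(128, 10)) * 0.01, np.zeros(10)

h = np.maximum(0, x @ W1 + b1)                # ReLU hidden layer
probs = softmax(h @ W2 + b2)                  # one probability row per image
pred = probs.argmax(axis=1)                   # predicted digit per image
```

Training then consists of adjusting W1, b1, W2, b2 so that the probability of the correct digit goes up, which Keras handles via backpropagation.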
Images classification with Convolutional Neural Networks (CNN)
- GTSRB1 - Dataset analysis and preparation
  Episode 1: Analysis of the GTSRB dataset and creation of an enhanced dataset
- GTSRB2 - First convolutions
  Episode 2: First convolutions and first classification of our traffic signs
- GTSRB3 - Training monitoring
  Episode 3: Monitoring, analysis and checkpoints during a training session
- GTSRB4 - Data augmentation
  Episode 4: Adding data by data augmentation when we lack it, to improve our results
- GTSRB5 - Full convolutions
  Episode 5: A lot of models, a lot of datasets and a lot of results
- GTSRB6 - Full convolutions as a batch
  Episode 6: To compute bigger, use your notebook in batch mode
- GTSRB7 - Batch reports
  Episode 7: Displaying our jobs report, and the winner is...
- GTSRB10 - OAR batch script submission
  Bash script for an OAR batch submission of an ipython code
- GTSRB11 - SLURM batch script
  Bash script for a Slurm batch submission of an ipython code
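The elementary operation behind these convolution episodes can be sketched in numpy as a "valid" cross-correlation (the computation a Conv2D layer actually performs); the vertical-edge kernel below is a classic illustrative choice:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' cross-correlation over a 2D image."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Toy image: left half dark, right half bright
image = np.zeros((6, 6))
image[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0]])   # responds to left-to-right increases
edges = conv2d_valid(image, kernel)
# Only the column where intensity jumps lights up in the output
```

A CNN learns the kernel values instead of fixing them by hand, and stacks many such filters per layer.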
Sentiment analysis with word embedding
- IMDB1 - Sentiment analysis with text embedding
  A very classical example of word embedding with a dataset from the Internet Movie Database (IMDB)
- IMDB2 - Reload and reuse a saved model
  Retrieving a saved model to perform a sentiment analysis (movie review)
- IMDB3 - Sentiment analysis with a LSTM network
  Still the same problem, but with a network combining embedding and LSTM
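An embedding layer, the central ingredient of these notebooks, is essentially a trainable lookup table indexed by token ids; a numpy sketch with arbitrary vocabulary size and embedding dimension (the token ids are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 1000, 16               # illustrative sizes
E = rng.normal(size=(vocab_size, embed_dim))   # the trainable table

# A "review" is a sequence of token ids; embedding is plain row indexing
tokens = np.array([12, 7, 12, 431])
vectors = E[tokens]                            # shape (4, 16)

# Identical words map to identical vectors by construction
assert np.array_equal(vectors[0], vectors[2])

# A simple baseline: average the word vectors into one fixed-size
# representation of the whole review, fed to a small classifier
review_vector = vectors.mean(axis=0)           # shape (16,)
```

The LSTM variant (IMDB3) replaces the averaging with a recurrent network that reads the vectors in order, so word order can matter.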
Time series with Recurrent Neural Network (RNN)
- SYNOP1 - Preparation of data
  Episode 1: Data analysis and preparation of a meteorological dataset (SYNOP)
- SYNOP2 - First predictions at 3h
  Episode 2: Learning session and weather prediction attempt at 3h
- SYNOP3 - 12h predictions
  Episode 3: An attempt to predict over a longer term
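The recurrence at the heart of these predictions can be sketched as a simple (Elman) RNN cell in numpy; the gated cells (LSTM) used in practice build on this same form, and the sizes below are illustrative (e.g. 3 weather variables observed over 24 time steps):

```python
import numpy as np

def rnn_forward(x_seq, Wx, Wh, b):
    """Simple RNN: h_t = tanh(x_t @ Wx + h_{t-1} @ Wh + b)."""
    h = np.zeros(Wh.shape[0])
    for x_t in x_seq:                           # one step per observation
        h = np.tanh(x_t @ Wx + h @ Wh + b)
    return h                                    # final hidden state

rng = np.random.default_rng(0)
n_features, n_hidden = 3, 8                     # illustrative dimensions
Wx = rng.normal(size=(n_features, n_hidden)) * 0.1
Wh = rng.normal(size=(n_hidden, n_hidden)) * 0.1
b = np.zeros(n_hidden)

x_seq = rng.normal(size=(24, n_features))       # 24 time steps of observations
h = rnn_forward(x_seq, Wx, Wh, b)               # summary the prediction is based on
```

The final hidden state is a fixed-size summary of the whole sequence; a dense layer on top of it produces the 3h or 12h forecast.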
Unsupervised learning with an autoencoder neural network (AE)
- AE1 - Building and training an AE denoiser model
  Episode 1: After construction, the model is trained with noisy data from the MNIST dataset
- AE2 - Exploring our denoiser model
  Episode 2: Using the previously trained autoencoder to denoise data
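The training pairs for such a denoiser are built by corrupting clean samples and keeping the originals as targets; a numpy sketch of that pair construction (the noise level and the random stand-in data are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0, 1, size=(32, 28, 28))   # stand-in for MNIST images

# Corrupt the inputs; the *clean* images remain the training targets
noise_level = 0.3                              # illustrative noise strength
noisy = clean + noise_level * rng.normal(size=clean.shape)
noisy = np.clip(noisy, 0.0, 1.0)               # keep a valid pixel range

# The autoencoder is then trained on the (noisy -> clean) mapping,
# e.g. model.fit(noisy, clean, ...) in Keras
```

Because the target is the clean image, the bottleneck forces the model to keep the structure of the digit and discard the noise.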
Generative network with Variational Autoencoder (VAE)
- VAE1 - First VAE, with a small dataset (MNIST)
  Construction and training of a VAE with a latent space of small dimension
- VAE2 - Analysis of the associated latent space
  Visualization and analysis of the VAE's latent space
- VAE5 - Another game play: About the CelebA dataset
  Episode 1: Presentation of the CelebA dataset and problems related to its size
- VAE6 - Generation of a clustered dataset
  Episode 2: Analysis of the CelebA dataset and creation of a clustered and usable dataset
- VAE7 - Checking the clustered dataset
  Episode 3: Clustered dataset verification and testing of our data generator
- VAE8 - Training session for our VAE
  Episode 4: Training with our clustered datasets in notebook or batch mode
- VAE9 - Data generation from latent space
  Episode 5: Exploring the latent space to generate new data
- VAE10 - SLURM batch script
  Bash script for a SLURM batch submission of VAE8 notebooks
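The step that makes a VAE trainable is the reparameterization trick: the encoder outputs a mean and log-variance per latent dimension, and sampling is rewritten as mean + std * epsilon so gradients can flow around the randomness. A numpy sketch with arbitrary batch and latent sizes, including the closed-form KL term of the usual VAE loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend encoder outputs for a batch of 5 samples, latent dimension 2
z_mean = rng.normal(size=(5, 2))
z_log_var = rng.normal(size=(5, 2))

# Reparameterization trick: z = mean + std * epsilon, epsilon ~ N(0, 1);
# the randomness is isolated in epsilon, so gradients reach mean/log_var
epsilon = rng.normal(size=z_mean.shape)
z = z_mean + np.exp(0.5 * z_log_var) * epsilon

# Per-sample KL divergence from N(0, I), the regularizer of the VAE loss
kl = -0.5 * np.sum(1 + z_log_var - z_mean**2 - np.exp(z_log_var), axis=1)
```

The KL term pulls the latent distribution toward a standard normal, which is what makes sampling new data from the latent space (VAE9) meaningful.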
Miscellaneous
- ACTF1 - Activation functions
  Some activation functions, with their derivatives
- NP1 - A short introduction to Numpy
  Numpy is an essential tool for scientific Python
- TSB1 - Tensorboard with/from Jupyter
  Four ways to use Tensorboard from the Jupyter environment
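The functions surveyed in ACTF1 and their derivatives fit in a few lines of numpy; the finite-difference comparison at the end is only a sanity check of the closed forms:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)               # derivative has the closed form s * (1 - s)
    return s * (1.0 - s)

def relu(x):
    return np.maximum(0.0, x)

def relu_prime(x):
    return (x > 0).astype(float)

def tanh_prime(x):
    return 1.0 - np.tanh(x) ** 2

# Numerical check of the sigmoid derivative against finite differences
x = np.linspace(-5, 5, 101)
num = np.gradient(sigmoid(x), x)
```

The derivatives are what backpropagation actually uses; e.g. sigmoid's gradient peaks at 0.25, which is one reason deep sigmoid networks learn slowly compared to ReLU.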
Installation
A procedure for configuring and starting Jupyter is available in the Wiki.
Licence
[en] Attribution - NonCommercial - ShareAlike 4.0 International (CC BY-NC-SA 4.0)
[Fr] Attribution - Pas d’Utilisation Commerciale - Partage dans les Mêmes Conditions 4.0 International
See License.
See Disclaimer.