About
This repository contains all the documents and links for the Fidle training.
Fidle (for Formation Introduction au Deep Learning) is a 2-day training session
co-organized by the Formation Permanente CNRS and the SARI and DEVLOG networks.
The objectives of this training are:
- Understand the basics of Deep Learning neural networks
- Develop a first experience through simple and representative examples
- Understand TensorFlow/Keras and Jupyter Lab technologies
- Become familiar with Tier-2 and Tier-1 academic computing environments with powerful GPUs
For more information, you can contact us at:
Current version: 1.2b1 DEV
Course materials
- Course slides: the course in PDF format (12 MB)
- Notebooks: get a zip or clone this repository (10 MB)
- Datasets: all the needed datasets (1.2 GB)
Have a look at how to get and install these notebooks and datasets.
Jupyter notebooks
Linear and logistic regression
- LINR1 - Linear regression with direct resolution
  Low-level implementation, using numpy, of a direct resolution for a linear regression (a minimal sketch of this and of GRAD1 follows the list)
- GRAD1 - Linear regression with gradient descent
  Low-level implementation of a solution by gradient descent, with basic and stochastic approaches
- POLR1 - Complexity Syndrome
  Illustration of the problem of complexity with polynomial regression
- LOGR1 - Logistic regression
  Simple example of logistic regression with a sklearn solution
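For a taste of what LINR1 and GRAD1 cover, here is a minimal, illustrative sketch (not the notebooks' actual code): the same toy problem solved once in closed form via the normal equation, and once by gradient descent on the mean squared error. The synthetic data and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + 1 + noise
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(0, 1, size=100)

# Design matrix with a bias column of ones
X = np.column_stack([np.ones_like(x), x])

# Direct resolution (LINR1 flavour): solve the normal equation X^T X theta = X^T y
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(f"direct resolution: intercept={theta[0]:.2f}, slope={theta[1]:.2f}")

# Gradient descent (GRAD1 flavour): repeatedly step against the MSE gradient
theta_gd = np.zeros(2)
lr = 0.01
for _ in range(2000):
    grad = 2 / len(y) * X.T @ (X @ theta_gd - y)   # gradient of mean squared error
    theta_gd -= lr * grad
print(f"gradient descent:  intercept={theta_gd[0]:.2f}, slope={theta_gd[1]:.2f}")
```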
Perceptron Model 1957
- PER57 - Perceptron Model 1957
  An example of using a Perceptron, with sklearn and the IRIS dataset of 1936! (a minimal sketch follows)
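As an illustration, a minimal sketch using scikit-learn's built-in Perceptron and Iris loader (the notebook's actual code may differ):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

# Iris (Fisher, 1936): 150 flowers, 4 features, 3 species
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A 1957-style linear classifier, trained with the perceptron rule
clf = Perceptron(max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```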
Basic regression using DNN
- BHPD1 - Regression with a Dense Network (DNN)
  Simple example of a regression with the Boston Housing Prices Dataset (BHPD); a minimal sketch follows the list
- BHPD2 - Regression with a Dense Network (DNN) - Advanced code
  A more advanced implementation of the previous example
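To give the flavour of BHPD1, here is a minimal regression DNN in Keras, assuming the Boston Housing data shipped with tf.keras; layer sizes and hyperparameters are illustrative, not necessarily those of the notebooks:

```python
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.boston_housing.load_data()

# Standardize features using training statistics only
mean, std = x_train.mean(axis=0), x_train.std(axis=0)
x_train, x_test = (x_train - mean) / std, (x_test - mean) / std

# A small dense network with a single linear output: the predicted price
model = keras.Sequential([
    keras.layers.Input(shape=(x_train.shape[1],)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1),
])
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
model.fit(x_train, y_train, epochs=50, batch_size=16, verbose=0)
print(model.evaluate(x_test, y_test, verbose=0))   # [test mse, test mae]
```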
Basic classification using a DNN
- MNIST1 - Simple classification with DNN
  An example of classification using a dense neural network on the famous MNIST dataset (a minimal sketch follows)
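A minimal sketch of this kind of model, assuming the MNIST data shipped with tf.keras (layer sizes are illustrative):

```python
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixel values to [0, 1]

# 28x28 images flattened to 784 inputs, 10 output probabilities (one per digit)
model = keras.Sequential([
    keras.layers.Input(shape=(28, 28)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, verbose=0)
print(model.evaluate(x_test, y_test, verbose=0))    # [test loss, test accuracy]
```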
Images classification with Convolutional Neural Networks (CNN)
- GTSRB1 - CNN with GTSRB dataset - Data analysis and preparation
  Episode 1: Data analysis and creation of a usable dataset
- GTSRB2 - CNN with GTSRB dataset - First convolutions
  Episode 2: First convolutions and first results (an illustrative model sketch follows the list)
- GTSRB3 - CNN with GTSRB dataset - Monitoring
  Episode 3: Monitoring and analysing training, managing checkpoints
- GTSRB4 - CNN with GTSRB dataset - Data augmentation
  Episode 4: Improving the results with data augmentation
- GTSRB5 - CNN with GTSRB dataset - Full convolutions
  Episode 5: A lot of models, a lot of datasets and a lot of results
- GTSRB6 - Full convolutions as a batch
  Episode 6: Running the full convolutions notebook as a batch
- GTSRB7 - CNN with GTSRB dataset - Show reports
  Episode 7: Displaying a jobs report
- GTSRB10 - OAR batch submission
  Bash script for OAR batch submission of the GTSRB notebook
- GTSRB11 - SLURM batch script
  Bash script for SLURM batch submission of the GTSRB notebooks
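The GTSRB data itself is built in episode 1, so the sketch below only shows the general shape of a small Keras CNN for the 43 traffic-sign classes. The 24x24x3 input size and the layer sizes are assumptions, and x_train / y_train stand for arrays produced by the preparation step:

```python
from tensorflow import keras

# Assumed input: 24x24 RGB crops; GTSRB has 43 traffic-sign classes
model = keras.Sequential([
    keras.layers.Input(shape=(24, 24, 3)),
    keras.layers.Conv2D(32, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(43, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Checkpoint management (episode 3 flavour): keep the best model seen so far
callbacks = [keras.callbacks.ModelCheckpoint('best_model.h5', save_best_only=True)]
# model.fit(x_train, y_train, validation_split=0.1, epochs=10, callbacks=callbacks)
```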
Sentiment analysis with word embedding
- IMDB1 - Text embedding with IMDB
  A very classical example of word embedding for text classification (sentiment analysis)
- IMDB2 - Text embedding with IMDB - Reloaded
  Example of reusing a previously saved model
- IMDB3 - Text embedding/LSTM model with IMDB
  Still the same problem, but with a network combining embedding and LSTM (a minimal sketch follows the list)
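A minimal sketch of the IMDB3 combination, assuming the IMDB data shipped with tf.keras; vocabulary size, sequence length and layer sizes are illustrative:

```python
from tensorflow import keras

vocab_size, maxlen = 10000, 256
(x_train, y_train), _ = keras.datasets.imdb.load_data(num_words=vocab_size)
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)

# Embedding turns word indices into dense vectors; the LSTM reads the sequence
model = keras.Sequential([
    keras.layers.Input(shape=(maxlen,)),
    keras.layers.Embedding(vocab_size, 32),
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation='sigmoid'),   # positive / negative review
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3, batch_size=128, validation_split=0.2)

# IMDB2 flavour: save the trained model, then reload it in a later session
# model.save('imdb_model.keras')
# model = keras.models.load_model('imdb_model.keras')
```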
Time series with Recurrent Neural Network (RNN)
- SYNOP1 - Time series with RNN - Preparation of data
  Episode 1: Data analysis and creation of a usable dataset
- SYNOP2 - Time series with RNN - Try a prediction
  Episode 2: Training session and first predictions (a minimal sketch follows the list)
- SYNOP3 - Time series with RNN - 12h predictions
  Episode 3: Attempt to predict in the longer term
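A minimal sketch of the general recipe on a synthetic series (the real notebooks use multivariate SYNOP weather data): slice the sequence into (window, next value) pairs and train a small LSTM.

```python
import numpy as np
from tensorflow import keras

# Stand-in data: a plain sine wave instead of the SYNOP measurements
series = np.sin(np.linspace(0, 100, 2000)).astype('float32')

# Sliding windows: predict series[i + window] from series[i : i + window]
window = 20
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]   # shape (samples, timesteps, features=1)

model = keras.Sequential([
    keras.layers.Input(shape=(window, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),   # the predicted next value
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=5, verbose=0)
```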
Unsupervised learning with an autoencoder neural network (AE)
- AE1 - AutoEncoder (AE) with MNIST
  Episode 1: Model construction and training (a minimal sketch follows the list)
- AE2 - AutoEncoder (AE) with MNIST - Analysis
  Episode 2: Exploring our denoiser
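A minimal sketch of a dense denoising autoencoder on MNIST (the notebooks' architecture may differ): train the network to map a noisy image back to its clean original.

```python
import numpy as np
from tensorflow import keras

(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0

# Noisy inputs, clean targets: the autoencoder learns to denoise
rng = np.random.default_rng(0)
noisy = np.clip(x_train + 0.3 * rng.normal(size=x_train.shape), 0, 1).astype('float32')

inputs = keras.Input(shape=(784,))
latent = keras.layers.Dense(32, activation='relu')(inputs)        # encoder
outputs = keras.layers.Dense(784, activation='sigmoid')(latent)   # decoder
autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(noisy, x_train, epochs=5, batch_size=256, verbose=0)
```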
Generative network with Variational Autoencoder (VAE)
- VAE1 - Variational AutoEncoder (VAE) with MNIST
  Building a simple model with the MNIST dataset (a minimal sketch of the reparameterization idea follows the list)
- VAE2 - Variational AutoEncoder (VAE) with MNIST - Analysis
  Visualization and analysis of the latent space
- VAE3 - About the CelebA dataset
  Presentation of the CelebA dataset and of the problems related to its size
- VAE6 - Preparation of the CelebA dataset
  Preparation of a clustered, batchable dataset
- VAE7 - Checking the clustered CelebA dataset
  Checking the clustered dataset
- VAE8 - Variational AutoEncoder (VAE) with CelebA (small)
  Variational AutoEncoder (VAE) with CelebA (small, 128x128 resolution)
- VAE9 - Variational AutoEncoder (VAE) with CelebA - Analysis
  Exploring the latent space of our trained models
- VAE10 - SLURM batch script
  Bash script for SLURM batch submission of the VAE notebooks
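At the heart of all the VAE notebooks is the reparameterization trick: the encoder outputs a mean and a log-variance, a latent point is sampled as z = mu + sigma * eps, and a KL term keeps the latent distribution close to N(0, I). A minimal, simplified sketch (the architecture and the weighting of the two loss terms are assumptions, not the notebooks' code):

```python
import tensorflow as tf
from tensorflow import keras

latent_dim = 2

class VAE(keras.Model):
    """Toy VAE: encode to (mu, log_var), sample z, decode back to the image."""
    def __init__(self):
        super().__init__()
        self.enc = keras.layers.Dense(256, activation='relu')
        self.mu = keras.layers.Dense(latent_dim)
        self.log_var = keras.layers.Dense(latent_dim)
        self.dec = keras.Sequential([
            keras.layers.Dense(256, activation='relu'),
            keras.layers.Dense(784, activation='sigmoid'),
        ])

    def call(self, x):
        h = self.enc(x)
        mu, log_var = self.mu(h), self.log_var(h)
        eps = tf.random.normal(tf.shape(mu))
        z = mu + tf.exp(0.5 * log_var) * eps   # reparameterization trick
        # KL divergence between q(z|x) and the N(0, I) prior, added to the loss
        kl = -0.5 * tf.reduce_mean(
            tf.reduce_sum(1 + log_var - tf.square(mu) - tf.exp(log_var), axis=-1))
        self.add_loss(kl)
        return self.dec(z)

vae = VAE()
vae.compile(optimizer='adam', loss='binary_crossentropy')
# Inputs are also targets; x_train as in the autoencoder sketch above
# vae.fit(x_train, x_train, epochs=10, batch_size=128)
```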
Miscellaneous
- ACTF1 - Activation functions
  Some activation functions, with their derivatives (a minimal sketch follows the list)
- NP1 - A short introduction to Numpy
  Numpy is an essential tool for scientific Python
- TSB1 - Tensorboard with/from Jupyter
  4 ways to use TensorBoard from the Jupyter environment
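For instance, a minimal sketch in the spirit of ACTF1, with NumPy:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1 - s)               # sigma'(x) = sigma(x) * (1 - sigma(x))

def relu(x):
    return np.maximum(0, x)

def d_relu(x):
    return (x > 0).astype(float)     # 0 for x < 0, 1 for x > 0

def d_tanh(x):
    return 1 - np.tanh(x) ** 2       # tanh'(x) = 1 - tanh(x)^2

x = np.linspace(-5, 5, 11)
print(sigmoid(x))
print(d_sigmoid(x))
```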
Installation
A procedure for configuring and starting Jupyter is available in the Wiki.
Licence
[en] Attribution - NonCommercial - ShareAlike 4.0 International (CC BY-NC-SA 4.0)
[Fr] Attribution - Pas d’Utilisation Commerciale - Partage dans les Mêmes Conditions 4.0 International
See License.
See Disclaimer.