About
This repository contains all the documents and links for the Fidle training.
The objectives of this training, co-organized by the Formation Permanente CNRS and the SARI and DEVLOG networks, are:
- Understand the basics of deep neural networks (Deep Learning)
- Develop a first experience through simple and representative examples
- Understand the different types of networks, their architectures and their use cases
- Understand TensorFlow/Keras and JupyterLab technologies on GPU
- Get familiar with Tier-2 (meso) and/or Tier-1 (national) academic computing environments
Course materials
Get the course slides
Useful information is also available in the wiki
Jupyter notebooks:
- Linear regression with direct resolution: direct determination of a linear regression
- Linear regression with gradient descent: an example of gradient descent in the simple case of a linear regression
- Complexity Syndrome: illustration of the problem of complexity with polynomial regression
- Logistic regression, in pure TensorFlow: logistic regression with mini-batch gradient descent using pure TensorFlow
- Regression with a Dense Network (DNN): a simple regression with a dense neural network (DNN), on the BHPD dataset
- Regression with a Dense Network (DNN) - Advanced code: a more advanced example of DNN code, on the BHPD dataset
- CNN with GTSRB dataset - Data analysis and preparation: episode 1, data analysis and creation of a usable dataset
- CNN with GTSRB dataset - First convolutions: episode 2, first convolutions and first results
- CNN with GTSRB dataset - Monitoring: episode 3, monitoring and analysing training, managing checkpoints
- CNN with GTSRB dataset - Data augmentation: episode 4, improving the results with data augmentation
- CNN with GTSRB dataset - Full convolutions: episode 5, a lot of models, a lot of datasets and a lot of results
- CNN with GTSRB dataset - Full convolutions as a batch: episode 6, running the full-convolutions notebook as a batch job
- Tensorboard with/from Jupyter: four ways to use TensorBoard from the Jupyter environment
- Text embedding with IMDB: a very classical example of word embedding for text classification (sentiment analysis)
- Text embedding with IMDB - Reloaded: an example of reusing a previously saved model
- Text embedding/LSTM model with IMDB: the same problem, but with a network combining embedding and LSTM
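To give a flavour of the gradient-descent notebook above, here is a minimal NumPy sketch of linear regression trained by gradient descent on synthetic data (the data, learning rate and iteration count are illustrative, not taken from the course material):

```python
import numpy as np

# Synthetic data: y = 2x + 1, plus a little noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 200)

# Model y_hat = w*x + b, trained by gradient descent on the MSE loss
w, b = 0.0, 0.0
lr = 0.5
for _ in range(1000):
    y_hat = w * x + b
    grad_w = 2.0 * np.mean((y_hat - y) * x)  # dMSE/dw
    grad_b = 2.0 * np.mean(y_hat - y)        # dMSE/db
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # should end up close to 2.0 and 1.0
```

The course notebooks go further (mini-batches, pure TensorFlow, dense networks), but the update rule is the same idea: follow the negative gradient of the loss.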
Installation
To run these examples, you need an environment with the following packages:
- Python >3.5
- numpy
- TensorFlow 2.0
- scikit-image
- scikit-learn
- Matplotlib
- seaborn
You can install such a predefined environment with:
conda env create -f environment.yml
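If you prefer to build the environment yourself, an environment.yml covering the package list above might look like this (a sketch only; the file shipped in the repository, including the environment name and pinned versions, may differ):

```yaml
name: fidle
channels:
  - conda-forge
  - defaults
dependencies:
  - python>=3.6
  - numpy
  - tensorflow=2.0
  - scikit-image
  - scikit-learn
  - matplotlib
  - seaborn
```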
To manage conda environments, see there.
Licence
[en] Attribution - NonCommercial - ShareAlike 4.0 International (CC BY-NC-SA 4.0)
[Fr] Attribution - Pas d’Utilisation Commerciale - Partage dans les Mêmes Conditions 4.0 International
See License.
See Disclaimer.