Commit 0ec5dca7 authored by Jean-Luc Parouty

Minor corrections (v2.1b6)

parent 1ce956a5
%% Cell type:code id:56ec7e61 tags:
``` python
from IPython.display import display,Markdown
display(Markdown(open('README.md', 'r').read()))
#
# This README is visible under Jupyter Lab ;-)
# Automatically generated on : 13/10/22 10:19:33
```
%% Output
<a name="top"></a>
[<img width="600px" src="fidle/img/00-Fidle-titre-01.svg"></img>](#top)
<!-- --------------------------------------------------- -->
<!-- To correctly view this README under Jupyter Lab -->
<!-- Open the notebook: README.ipynb! -->
<!-- --------------------------------------------------- -->
## About Fidle
This repository contains all the documents and links of the **Fidle Training**.
Fidle (for Formation Introduction au Deep Learning) is a 3-day training session co-organized
by the 3IA MIAI institute, the CNRS, via the Mission for Transversal and Interdisciplinary
Initiatives (MITI), and the University of Grenoble Alpes (UGA).
The objectives of this training are :
- Understanding the **basics of Deep Learning** neural networks
- Developing a **first experience** through simple and representative examples
- Understanding **Tensorflow/Keras** and **Jupyter lab** technologies
- Apprehending the **academic computing environments**, Tier-2 or Tier-1, with powerful GPUs
For more information, see **https://fidle.cnrs.fr** :
- **[Fidle site](https://fidle.cnrs.fr)**
- **[Presentation of the training](https://fidle.cnrs.fr/presentation)**
- **[Program 2021/2022](https://fidle.cnrs.fr/programme)**
- [Subscribe to the list](https://fidle.cnrs.fr/listeinfo) to stay informed !
- [Find us on YouTube](https://fidle.cnrs.fr/youtube)
- [Corrected notebooks](https://fidle.cnrs.fr/done)
For more information, you can contact us at :
[<img width="200px" style="vertical-align:middle" src="fidle/img/00-Mail_contact.svg"></img>](#top)
Current Version : <!-- VERSION_BEGIN -->2.1b6<!-- VERSION_END -->
## Course materials
| | | | |
|:--:|:--:|:--:|:--:|
| **[<img width="50px" src="fidle/img/00-Fidle-pdf.svg"></img><br>Course slides](https://fidle.cnrs.fr/supports)**<br>The course in pdf format<br>(12 Mo)| **[<img width="50px" src="fidle/img/00-Notebooks.svg"></img><br>Notebooks](https://fidle.cnrs.fr/notebooks)**<br> &nbsp;&nbsp;&nbsp;&nbsp;Get a Zip or clone this repository &nbsp;&nbsp;&nbsp;&nbsp;<br>(40 Mo)| **[<img width="50px" src="fidle/img/00-Datasets-tar.svg"></img><br>Datasets](https://fidle.cnrs.fr/fidle-datasets.tar)**<br>All the needed datasets<br>(1.2 Go)|**[<img width="50px" src="fidle/img/00-Videos.svg"></img><br>Videos](https://fidle.cnrs.fr/youtube)**<br>&nbsp;&nbsp;&nbsp;&nbsp;Our Youtube channel&nbsp;&nbsp;&nbsp;&nbsp;<br>&nbsp;|
Have a look at **[How to get and install](https://fidle.cnrs.fr/installation)** these notebooks and datasets.
## Jupyter notebooks
<!-- TOC_BEGIN -->
<!-- Automatically generated on : 13/10/22 10:19:33 -->
### Linear and logistic regression
- **[LINR1](LinearReg/01-Linear-Regression.ipynb)** - [Linear regression with direct resolution](LinearReg/01-Linear-Regression.ipynb)
Low-level implementation, using numpy, of a direct resolution for a linear regression
- **[GRAD1](LinearReg/02-Gradient-descent.ipynb)** - [Linear regression with gradient descent](LinearReg/02-Gradient-descent.ipynb)
Low-level implementation of a solution by gradient descent. Basic and stochastic approaches.
- **[POLR1](LinearReg/03-Polynomial-Regression.ipynb)** - [Complexity Syndrome](LinearReg/03-Polynomial-Regression.ipynb)
Illustration of the problem of complexity with polynomial regression
- **[LOGR1](LinearReg/04-Logistic-Regression.ipynb)** - [Logistic regression](LinearReg/04-Logistic-Regression.ipynb)
Simple example of logistic regression with a sklearn solution
### Perceptron Model 1957
- **[PER57](IRIS/01-Simple-Perceptron.ipynb)** - [Perceptron Model 1957](IRIS/01-Simple-Perceptron.ipynb)
Example of use of a Perceptron, with sklearn and the IRIS dataset of 1936 !
### Basic regression using DNN
- **[BHPD1](BHPD/01-DNN-Regression.ipynb)** - [Regression with a Dense Network (DNN)](BHPD/01-DNN-Regression.ipynb)
Simple example of a regression with the Boston Housing Prices Dataset (BHPD)
- **[BHPD2](BHPD/02-DNN-Regression-Premium.ipynb)** - [Regression with a Dense Network (DNN) - Advanced code](BHPD/02-DNN-Regression-Premium.ipynb)
A more advanced implementation of the previous example
### Basic classification using a DNN
- **[MNIST1](MNIST/01-DNN-MNIST.ipynb)** - [Simple classification with DNN](MNIST/01-DNN-MNIST.ipynb)
An example of classification using a dense neural network for the famous MNIST dataset
- **[MNIST2](MNIST/02-CNN-MNIST.ipynb)** - [Simple classification with CNN](MNIST/02-CNN-MNIST.ipynb)
An example of classification using a convolutional neural network for the famous MNIST dataset
### Images classification with Convolutional Neural Networks (CNN)
- **[GTSRB1](GTSRB/01-Preparation-of-data.ipynb)** - [Dataset analysis and preparation](GTSRB/01-Preparation-of-data.ipynb)
Episode 1 : Analysis of the GTSRB dataset and creation of an enhanced dataset
- **[GTSRB2](GTSRB/02-First-convolutions.ipynb)** - [First convolutions](GTSRB/02-First-convolutions.ipynb)
Episode 2 : First convolutions and first classification of our traffic signs
- **[GTSRB3](GTSRB/03-Tracking-and-visualizing.ipynb)** - [Training monitoring](GTSRB/03-Tracking-and-visualizing.ipynb)
Episode 3 : Monitoring, analysis and checkpoints during a training session
- **[GTSRB4](GTSRB/04-Data-augmentation.ipynb)** - [Data augmentation](GTSRB/04-Data-augmentation.ipynb)
Episode 4 : Using data augmentation to improve results when data is scarce
- **[GTSRB5](GTSRB/05-Full-convolutions.ipynb)** - [Full convolutions](GTSRB/05-Full-convolutions.ipynb)
Episode 5 : A lot of models, a lot of datasets and a lot of results.
- **[GTSRB6](GTSRB/06-Notebook-as-a-batch.ipynb)** - [Full convolutions as a batch](GTSRB/06-Notebook-as-a-batch.ipynb)
Episode 6 : For bigger computations, use your notebook in batch mode
- **[GTSRB7](GTSRB/07-Show-report.ipynb)** - [Batch reports](GTSRB/07-Show-report.ipynb)
Episode 7 : Displaying our jobs report, and the winner is...
- **[GTSRB10](GTSRB/batch_oar.sh)** - [OAR batch script submission](GTSRB/batch_oar.sh)
Bash script for an OAR batch submission of an IPython code
- **[GTSRB11](GTSRB/batch_slurm.sh)** - [SLURM batch script](GTSRB/batch_slurm.sh)
Bash script for a Slurm batch submission of an IPython code
### Sentiment analysis with word embedding
- **[IMDB1](IMDB/01-One-hot-encoding.ipynb)** - [Sentiment analysis with one-hot encoding](IMDB/01-One-hot-encoding.ipynb)
A basic example of sentiment analysis with sparse encoding, using a dataset from the Internet Movie Database (IMDB)
- **[IMDB2](IMDB/02-Keras-embedding.ipynb)** - [Sentiment analysis with text embedding](IMDB/02-Keras-embedding.ipynb)
A very classical example of word embedding with a dataset from the Internet Movie Database (IMDB)
- **[IMDB3](IMDB/03-Prediction.ipynb)** - [Reload and reuse a saved model](IMDB/03-Prediction.ipynb)
Retrieving a saved model to perform a sentiment analysis (movie review)
- **[IMDB4](IMDB/04-Show-vectors.ipynb)** - [Reload embedded vectors](IMDB/04-Show-vectors.ipynb)
Retrieving embedded vectors from our trained model
- **[IMDB5](IMDB/05-LSTM-Keras.ipynb)** - [Sentiment analysis with an RNN network](IMDB/05-LSTM-Keras.ipynb)
Still the same problem, but with a network combining embedding and RNN
### Time series with Recurrent Neural Network (RNN)
- **[LADYB1](SYNOP/LADYB1-Ladybug.ipynb)** - [Prediction of a 2D trajectory via RNN](SYNOP/LADYB1-Ladybug.ipynb)
Artificial dataset generation and prediction attempt via a recurrent network
- **[SYNOP1](SYNOP/SYNOP1-Preparation-of-data.ipynb)** - [Preparation of data](SYNOP/SYNOP1-Preparation-of-data.ipynb)
Episode 1 : Data analysis and preparation of a usable meteorological dataset (SYNOP)
- **[SYNOP2](SYNOP/SYNOP2-First-predictions.ipynb)** - [First predictions at 3h](SYNOP/SYNOP2-First-predictions.ipynb)
Episode 2 : RNN training session for a weather prediction attempt at 3h
- **[SYNOP3](SYNOP/SYNOP3-12h-predictions.ipynb)** - [12h predictions](SYNOP/SYNOP3-12h-predictions.ipynb)
Episode 3 : Attempt to predict over a longer term
### Sentiment analysis with transformers
- **[TRANS1](Transformers/01-Distilbert.ipynb)** - [IMDB, Sentiment analysis with Transformers](Transformers/01-Distilbert.ipynb)
Using a Transformer to perform a sentiment analysis (IMDB) - Jean Zay version
- **[TRANS2](Transformers/02-distilbert_colab.ipynb)** - [IMDB, Sentiment analysis with Transformers](Transformers/02-distilbert_colab.ipynb)
Using a Transformer to perform a sentiment analysis (IMDB) - Colab version
### Unsupervised learning with an autoencoder neural network (AE)
- **[AE1](AE/01-Prepare-MNIST-dataset.ipynb)** - [Prepare a noisy MNIST dataset](AE/01-Prepare-MNIST-dataset.ipynb)
Episode 1 : Preparation of a noisy MNIST dataset
- **[AE2](AE/02-AE-with-MNIST.ipynb)** - [Building and training an AE denoiser model](AE/02-AE-with-MNIST.ipynb)
Episode 2 : Construction of a denoising autoencoder and its training with a noisy MNIST dataset.
- **[AE3](AE/03-AE-with-MNIST-post.ipynb)** - [Playing with our denoiser model](AE/03-AE-with-MNIST-post.ipynb)
Episode 3 : Using the previously trained autoencoder to denoise data
- **[AE4](AE/04-ExtAE-with-MNIST.ipynb)** - [Denoiser and classifier model](AE/04-ExtAE-with-MNIST.ipynb)
Episode 4 : Construction of a denoiser and classifier model
- **[AE5](AE/05-ExtAE-with-MNIST.ipynb)** - [Advanced denoiser and classifier model](AE/05-ExtAE-with-MNIST.ipynb)
Episode 5 : Construction of an advanced denoiser and classifier model
### Generative network with Variational Autoencoder (VAE)
- **[VAE1](VAE/01-VAE-with-MNIST.ipynb)** - [First VAE, using functional API (MNIST dataset)](VAE/01-VAE-with-MNIST.ipynb)
Construction and training of a VAE, using the functional API, with a latent space of small dimension.
- **[VAE2](VAE/02-VAE-with-MNIST.ipynb)** - [VAE, using a custom model class (MNIST dataset)](VAE/02-VAE-with-MNIST.ipynb)
Construction and training of a VAE, using a model subclass, with a latent space of small dimension.
- **[VAE3](VAE/03-VAE-with-MNIST-post.ipynb)** - [Analysis of the VAE's latent space of MNIST dataset](VAE/03-VAE-with-MNIST-post.ipynb)
Visualization and analysis of the VAE's latent space for the MNIST dataset
- **[VAE5](VAE/05-About-CelebA.ipynb)** - [Another game play : About the CelebA dataset](VAE/05-About-CelebA.ipynb)
Episode 1 : Presentation of the CelebA dataset and problems related to its size
- **[VAE6](VAE/06-Prepare-CelebA-datasets.ipynb)** - [Generation of a clustered dataset](VAE/06-Prepare-CelebA-datasets.ipynb)
Episode 2 : Analysis of the CelebA dataset and creation of a clustered and usable dataset
- **[VAE7](VAE/07-Check-CelebA.ipynb)** - [Checking the clustered dataset](VAE/07-Check-CelebA.ipynb)
Episode 3 : Clustered dataset verification and testing of our datagenerator
- **[VAE8](VAE/08-VAE-with-CelebA-128x128.ipynb)** - [Training session for our VAE with 128x128 images](VAE/08-VAE-with-CelebA-128x128.ipynb)
Episode 4 : Training with our clustered datasets in notebook or batch mode
- **[VAE9](VAE/09-VAE-with-CelebA-192x160.ipynb)** - [Training session for our VAE with 192x160 images](VAE/09-VAE-with-CelebA-192x160.ipynb)
Episode 4 : Training with our clustered datasets in notebook or batch mode
- **[VAE10](VAE/10-VAE-with-CelebA-post.ipynb)** - [Data generation from latent space](VAE/10-VAE-with-CelebA-post.ipynb)
Episode 5 : Exploring the latent space to generate new data
- **[VAE10](VAE/batch_slurm.sh)** - [SLURM batch script](VAE/batch_slurm.sh)
Bash script for SLURM batch submission of VAE8 notebooks
### Generative Adversarial Networks (GANs)
- **[SHEEP1](DCGAN/01-DCGAN-Draw-me-a-sheep.ipynb)** - [A first DCGAN to Draw a Sheep](DCGAN/01-DCGAN-Draw-me-a-sheep.ipynb)
Episode 1 : Draw me a sheep, revisited with a DCGAN
- **[SHEEP2](DCGAN/02-WGANGP-Draw-me-a-sheep.ipynb)** - [A WGAN-GP to Draw a Sheep](DCGAN/02-WGANGP-Draw-me-a-sheep.ipynb)
Episode 2 : Draw me a sheep, revisited with a WGAN-GP
### Deep Reinforcement Learning (DRL)
- **[DRL1](DRL/FIDLE_DQNfromScratch.ipynb)** - [Solving CartPole with DQN](DRL/FIDLE_DQNfromScratch.ipynb)
Using a Deep Q-Network to play CartPole - an inverted pendulum problem (PyTorch)
- **[DRL2](DRL/FIDLE_rl_baselines_zoo.ipynb)** - [RL Baselines3 Zoo: Training in Colab](DRL/FIDLE_rl_baselines_zoo.ipynb)
Demo of Stable Baselines3 with Colab
### Miscellaneous
- **[ACTF1](Misc/Activation-Functions.ipynb)** - [Activation functions](Misc/Activation-Functions.ipynb)
Some activation functions, with their derivatives.
- **[NP1](Misc/Numpy.ipynb)** - [A short introduction to Numpy](Misc/Numpy.ipynb)
Numpy is an essential tool for scientific Python.
- **[SCRATCH1](Misc/Scratchbook.ipynb)** - [Scratchbook](Misc/Scratchbook.ipynb)
A scratchbook for small examples
- **[TSB1](Misc/Using-Tensorboard.ipynb)** - [Tensorboard with/from Jupyter](Misc/Using-Tensorboard.ipynb)
4 ways to use Tensorboard from the Jupyter environment
<!-- TOC_END -->
## Installation
Have a look at **[How to get and install](https://fidle.cnrs.fr/installation)** these notebooks and datasets.
## Licence
[<img width="100px" src="fidle/img/00-fidle-CC BY-NC-SA.svg"></img>](https://creativecommons.org/licenses/by-nc-sa/4.0/)
\[en\] Attribution - NonCommercial - ShareAlike 4.0 International (CC BY-NC-SA 4.0)
\[Fr\] Attribution - Pas d’Utilisation Commerciale - Partage dans les Mêmes Conditions 4.0 International
See [License](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
See [Disclaimer](https://creativecommons.org/licenses/by-nc-sa/4.0/#).
----
[<img width="80px" src="fidle/img/00-Fidle-logo-01.svg"></img>](#top)
%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/00-Fidle-header-01.svg"></img>
# <!-- TITLE --> [VAE2] - VAE, using a custom model class (MNIST dataset)
<!-- DESC --> Construction and training of a VAE, using model subclass, with a latent space of small dimension.
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->
## Objectives :
- Understanding and implementing a **variational autoencoder** neural network (VAE)
- Understanding a still more **advanced programming model**, using a **custom model**
Since the computational cost is significant, it is preferable to start with a very simple dataset such as MNIST.
...MNIST with a small scale if you don't have a GPU ;-)
## What we're going to do :
- Define a VAE model
- Build the model
- Train it
- Have a look at the training process
## Acknowledgements :
Thanks to **François Chollet**, creator of Keras, whose example is at the root of this notebook.
See : https://keras.io/examples/generative/vae
%% Cell type:markdown id: tags:
## Step 1 - Init python stuff
%% Cell type:code id: tags:
``` python
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
import sys
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.callbacks import TensorBoard
from modules.models import VAE
from modules.layers import SamplingLayer
from modules.callbacks import ImagesCallback, BestModelCallback
from modules.datagen import MNIST
import fidle
# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('VAE2')
VAE.about()
```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Step 2 - Parameters ## Step 2 - Parameters
`scale` : with scale=1, training takes about 1'30 on a V100 GPU ...and more than 20' on a CPU !
`latent_dim` : 2 dimensions is small, but useful for drawing !
`fit_verbosity` : verbosity of the training progress bar: 0=silent, 1=progress bar, 2=one line per epoch
`loss_weights` : Our **loss function** is the weighted sum of two losses:
- `r_loss` which measures the loss during reconstruction. - `r_loss` which measures the loss during reconstruction.
- `kl_loss` which measures the dispersion. - `kl_loss` which measures the dispersion.
The weights are defined by: `loss_weights=[k1,k2]` where : `total_loss = k1*r_loss + k2*kl_loss` The weights are defined by: `loss_weights=[k1,k2]` where : `total_loss = k1*r_loss + k2*kl_loss`
In practice, a value of \[1,.01\] gives good results here. In practice, a value of \[1,.01\] gives good results here.
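As a rough illustration of how these two terms combine (a plain NumPy sketch, not the actual implementation in [VAE.py](./modules/models/VAE.py), which may for instance use binary cross-entropy for the reconstruction term):

``` python
import numpy as np

def vae_losses(x, x_reconst, z_mean, z_log_var, loss_weights=(1, .001)):
    # r_loss: reconstruction error (mean squared error, chosen here for simplicity)
    r_loss  = np.mean(np.square(x - x_reconst))
    # kl_loss: KL divergence between N(z_mean, exp(z_log_var)) and N(0, I),
    # i.e. how far the encoded distribution drifts from the prior
    kl_loss = -0.5 * np.mean(1 + z_log_var - np.square(z_mean) - np.exp(z_log_var))
    k1, k2  = loss_weights
    return k1 * r_loss + k2 * kl_loss, r_loss, kl_loss

# Toy example: a perfect reconstruction with a standard-normal latent code
# gives zero for both terms, hence a zero total loss.
x = np.zeros((4, 28, 28, 1))
total, r, kl = vae_losses(x, x, np.zeros((4, 2)), np.zeros((4, 2)))
```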
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
latent_dim = 6 latent_dim = 6
loss_weights = [1,.001] # [1, .001] give good results loss_weights = [1,.001] # [1, .001] give good results
scale = .2 scale = .2
seed = 123 seed = 123
batch_size = 64 batch_size = 64
epochs = 5 epochs = 5
fit_verbosity = 1 fit_verbosity = 1
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
Override parameters (batch mode) - Just forget this cell Override parameters (batch mode) - Just forget this cell
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
fidle.override('latent_dim', 'loss_weights', 'scale', 'seed', 'batch_size', 'epochs', 'fit_verbosity') fidle.override('latent_dim', 'loss_weights', 'scale', 'seed', 'batch_size', 'epochs', 'fit_verbosity')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Step 3 - Prepare data ## Step 3 - Prepare data
`MNIST.get_data()` returns : `x_train,y_train, x_test,y_test`, \
but we only need x_train for our training. but we only need x_train for our training.
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
x_data, y_data, _,_ = MNIST.get_data(seed=seed, scale=scale, train_prop=1 ) x_data, y_data, _,_ = MNIST.get_data(seed=seed, scale=scale, train_prop=1 )
fidle.scrawler.images(x_data[:20], None, indices='all', columns=10, x_size=1,y_size=1,y_padding=0, save_as='01-original') fidle.scrawler.images(x_data[:20], None, indices='all', columns=10, x_size=1,y_size=1,y_padding=0, save_as='01-original')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Step 4 - Build model ## Step 4 - Build model
In this example, we will use a **custom model**. In this example, we will use a **custom model**.
For this, we will use : For this, we will use :
- `SamplingLayer`, which generates a vector z from the parameters z_mean and z_log_var - See : [SamplingLayer.py](./modules/layers/SamplingLayer.py) - `SamplingLayer`, which generates a vector z from the parameters z_mean and z_log_var - See : [SamplingLayer.py](./modules/layers/SamplingLayer.py)
- `VAE`, a custom model with a specific train_step - See : [VAE.py](./modules/models/VAE.py) - `VAE`, a custom model with a specific train_step - See : [VAE.py](./modules/models/VAE.py)
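The sampling step relies on the *reparameterization trick*: instead of sampling z directly (which would not be differentiable), we sample a noise eps ~ N(0, I) and compute z deterministically from `z_mean` and `z_log_var`. A minimal NumPy sketch of the idea (the actual `SamplingLayer` operates on TensorFlow tensors):

``` python
import numpy as np

def sample_z(z_mean, z_log_var, seed=0):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    # Gradients can then flow through mu and log_var, not through the noise.
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(z_mean.shape)
    return z_mean + np.exp(0.5 * z_log_var) * eps

z = sample_z(np.array([[0.0, 1.0]]), np.array([[0.0, 0.0]]))  # sigma = exp(0) = 1
```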
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
#### Encoder #### Encoder
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
inputs = keras.Input(shape=(28, 28, 1)) inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, strides=1, padding="same", activation="relu")(inputs) x = layers.Conv2D(32, 3, strides=1, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x) x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x) x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, strides=1, padding="same", activation="relu")(x) x = layers.Conv2D(64, 3, strides=1, padding="same", activation="relu")(x)
x = layers.Flatten()(x) x = layers.Flatten()(x)
x = layers.Dense(16, activation="relu")(x) x = layers.Dense(16, activation="relu")(x)
z_mean = layers.Dense(latent_dim, name="z_mean")(x) z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x) z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = SamplingLayer()([z_mean, z_log_var]) z = SamplingLayer()([z_mean, z_log_var])
encoder = keras.Model(inputs, [z_mean, z_log_var, z], name="encoder") encoder = keras.Model(inputs, [z_mean, z_log_var, z], name="encoder")
encoder.compile() encoder.compile()
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
#### Decoder #### Decoder
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
inputs = keras.Input(shape=(latent_dim,)) inputs = keras.Input(shape=(latent_dim,))
x = layers.Dense(7 * 7 * 64, activation="relu")(inputs) x = layers.Dense(7 * 7 * 64, activation="relu")(inputs)
x = layers.Reshape((7, 7, 64))(x) x = layers.Reshape((7, 7, 64))(x)
x = layers.Conv2DTranspose(64, 3, strides=1, padding="same", activation="relu")(x) x = layers.Conv2DTranspose(64, 3, strides=1, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x) x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x) x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
outputs = layers.Conv2DTranspose(1, 3, padding="same", activation="sigmoid")(x) outputs = layers.Conv2DTranspose(1, 3, padding="same", activation="sigmoid")(x)
decoder = keras.Model(inputs, outputs, name="decoder") decoder = keras.Model(inputs, outputs, name="decoder")
decoder.compile() decoder.compile()
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
#### VAE #### VAE
`VAE` is a custom model with a specific train_step - See : [VAE.py](./modules/models/VAE.py) `VAE` is a custom model with a specific train_step - See : [VAE.py](./modules/models/VAE.py)
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vae = VAE(encoder, decoder, loss_weights) vae = VAE(encoder, decoder, loss_weights)
vae.compile(optimizer='adam') vae.compile(optimizer='adam')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Step 5 - Train ## Step 5 - Train
### 5.1 - Using two nice custom callbacks :-) ### 5.1 - Using two nice custom callbacks :-)
Two custom callbacks are used: Two custom callbacks are used:
- `ImagesCallback` : which saves some images during training - See [ImagesCallback.py](./modules/callbacks/ImagesCallback.py)
- `BestModelCallback` : which saves the best model - See [BestModelCallback.py](./modules/callbacks/BestModelCallback.py)
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
callback_images = ImagesCallback(x=x_data, z_dim=latent_dim, nb_images=5, from_z=True, from_random=True, run_dir=run_dir) callback_images = ImagesCallback(x=x_data, z_dim=latent_dim, nb_images=5, from_z=True, from_random=True, run_dir=run_dir)
callback_bestmodel = BestModelCallback( run_dir + '/models/best_model.h5' ) callback_bestmodel = BestModelCallback( run_dir + '/models/best_model.h5' )
callback_tensorboard = TensorBoard(log_dir=run_dir + '/logs', histogram_freq=1) callback_tensorboard = TensorBoard(log_dir=run_dir + '/logs', histogram_freq=1)
callbacks_list = [callback_images, callback_bestmodel] callbacks_list = [callback_images, callback_bestmodel]
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
### 5.2 - Let's train ! ### 5.2 - Let's train !
With `scale=1`, training takes about 1'15 on a GPU (V100 at IDRIS) ...or 20' on a CPU
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
chrono=fidle.Chrono() chrono=fidle.Chrono()
chrono.start() chrono.start()
history = vae.fit(x_data, epochs=epochs, batch_size=batch_size, callbacks=callbacks_list, verbose=fit_verbosity) history = vae.fit(x_data, epochs=epochs, batch_size=batch_size, callbacks=callbacks_list, verbose=fit_verbosity)
chrono.show() chrono.show()
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Step 6 - Training review ## Step 6 - Training review
### 6.1 - History ### 6.1 - History
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
fidle.scrawler.history(history, plot={"Loss":['loss','r_loss', 'kl_loss']}, save_as='history') fidle.scrawler.history(history, plot={"Loss":['loss','r_loss', 'kl_loss']}, save_as='history')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
### 6.2 - Reconstruction during training ### 6.2 - Reconstruction during training
At the end of each epoch, our callback saved some reconstructed images, following the pipeline:
Original image -> encoder -> z -> decoder -> Reconstructed image
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
images_z, images_r = callback_images.get_images( range(0,epochs,2) ) images_z, images_r = callback_images.get_images( range(0,epochs,2) )
fidle.utils.subtitle('Original images :') fidle.utils.subtitle('Original images :')
fidle.scrawler.images(x_data[:5], None, indices='all', columns=5, x_size=2,y_size=2, save_as='02-original') fidle.scrawler.images(x_data[:5], None, indices='all', columns=5, x_size=2,y_size=2, save_as='02-original')
fidle.utils.subtitle('Encoded/decoded images') fidle.utils.subtitle('Encoded/decoded images')
fidle.scrawler.images(images_z, None, indices='all', columns=5, x_size=2,y_size=2, save_as='03-reconstruct') fidle.scrawler.images(images_z, None, indices='all', columns=5, x_size=2,y_size=2, save_as='03-reconstruct')
fidle.utils.subtitle('Original images :') fidle.utils.subtitle('Original images :')
fidle.scrawler.images(x_data[:5], None, indices='all', columns=5, x_size=2,y_size=2, save_as=None) fidle.scrawler.images(x_data[:5], None, indices='all', columns=5, x_size=2,y_size=2, save_as=None)
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
### 6.3 - Generation (latent -> decoder) during training ### 6.3 - Generation (latent -> decoder) during training
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
fidle.utils.subtitle('Generated images from latent space') fidle.utils.subtitle('Generated images from latent space')
fidle.scrawler.images(images_r, None, indices='all', columns=5, x_size=2,y_size=2, save_as='04-encoded') fidle.scrawler.images(images_r, None, indices='all', columns=5, x_size=2,y_size=2, save_as='04-encoded')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Step 7 - Model evaluation ## Step 7 - Model evaluation
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
### 7.1 - Reload best model ### 7.1 - Reload best model
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vae=VAE() vae=VAE()
vae.reload(f'{run_dir}/models/best_model') vae.reload(f'{run_dir}/models/best_model')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
### 7.2 - Image reconstruction ### 7.2 - Image reconstruction
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
# ---- Select a few images
x_show = fidle.utils.pick_dataset(x_data, n=10) x_show = fidle.utils.pick_dataset(x_data, n=10)
# ---- Get latent points and reconstructed images # ---- Get latent points and reconstructed images
z_mean, z_var, z = vae.encoder.predict(x_show) z_mean, z_var, z = vae.encoder.predict(x_show)
x_reconst = vae.decoder.predict(z) x_reconst = vae.decoder.predict(z)
# ---- Show it # ---- Show it
labels=[ str(np.round(z[i],1)) for i in range(10) ] labels=[ str(np.round(z[i],1)) for i in range(10) ]
fidle.scrawler.images(x_show, None, indices='all', columns=10, x_size=2,y_size=2, save_as='05-original') fidle.scrawler.images(x_show, None, indices='all', columns=10, x_size=2,y_size=2, save_as='05-original')
fidle.scrawler.images(x_reconst, labels , indices='all', columns=10, x_size=2,y_size=2, save_as='06-reconstruct') fidle.scrawler.images(x_reconst, None, indices='all', columns=10, x_size=2,y_size=2, save_as='06-reconstruct')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
### 7.3 - Visualization of the latent space ### 7.3 - Visualization of the latent space
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
n_show = int(20000*scale) n_show = int(20000*scale)
# ---- Select images # ---- Select images
x_show, y_show = fidle.utils.pick_dataset(x_data,y_data, n=n_show) x_show, y_show = fidle.utils.pick_dataset(x_data,y_data, n=n_show)
# ---- Get latent points # ---- Get latent points
z_mean, z_var, z = vae.encoder.predict(x_show) z_mean, z_var, z = vae.encoder.predict(x_show)
# ---- Show them # ---- Show them
fig = plt.figure(figsize=(14, 10)) fig = plt.figure(figsize=(14, 10))
plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=30) plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=30)
plt.colorbar() plt.colorbar()
fidle.scrawler.save_fig('07-Latent-space') fidle.scrawler.save_fig('07-Latent-space')
plt.show() plt.show()
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
### 7.4 - Generative latent space ### 7.4 - Generative latent space
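The grid below is built with the inverse CDF (`scipy.stats.norm.ppf`) of the normal prior rather than a plain uniform spacing: equally spaced *probabilities* map to latent coordinates whose density follows N(0, 1), so the grid samples the latent space where the prior puts its mass. A small sketch of the idea:

``` python
import numpy as np
import scipy.stats

# Map equally spaced probabilities through the inverse CDF of the standard
# normal: the resulting coordinates cluster near 0, the mode of the prior.
probs  = np.linspace(0.01, 0.99, 5)
coords = scipy.stats.norm.ppf(probs)
```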
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
if latent_dim>2: if latent_dim>2:
print('Sorry, this part only works if the latent space is of dimension 2')
else: else:
grid_size = 18 grid_size = 18
grid_scale = 1 grid_scale = 1
# ---- Draw a ppf grid # ---- Draw a ppf grid
grid=[] grid=[]
for y in scipy.stats.norm.ppf(np.linspace(0.99, 0.01, grid_size),scale=grid_scale): for y in scipy.stats.norm.ppf(np.linspace(0.99, 0.01, grid_size),scale=grid_scale):
for x in scipy.stats.norm.ppf(np.linspace(0.01, 0.99, grid_size),scale=grid_scale): for x in scipy.stats.norm.ppf(np.linspace(0.01, 0.99, grid_size),scale=grid_scale):
grid.append( (x,y) ) grid.append( (x,y) )
grid=np.array(grid) grid=np.array(grid)
# ---- Draw latent points and grid
fig = plt.figure(figsize=(10, 8)) fig = plt.figure(figsize=(10, 8))
plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=20) plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=20)
plt.scatter(grid[:, 0] , grid[:, 1], c = 'black', s=60, linewidth=2, marker='+', alpha=1) plt.scatter(grid[:, 0] , grid[:, 1], c = 'black', s=60, linewidth=2, marker='+', alpha=1)
fidle.scrawler.save_fig('08-Latent-grid') fidle.scrawler.save_fig('08-Latent-grid')
plt.show() plt.show()
# ---- Plot grid corresponding images # ---- Plot grid corresponding images
x_reconst = vae.decoder.predict([grid]) x_reconst = vae.decoder.predict([grid])
fidle.scrawler.images(x_reconst, indices='all', columns=grid_size, x_size=0.5,y_size=0.5, y_padding=0,spines_alpha=0.1, save_as='09-Latent-morphing') fidle.scrawler.images(x_reconst, indices='all', columns=grid_size, x_size=0.5,y_size=0.5, y_padding=0,spines_alpha=0.1, save_as='09-Latent-morphing')
``` ```
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
fidle.end() fidle.end()
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
--- ---
<img width="80px" src="../fidle/img/00-Fidle-logo-01.svg"></img> <img width="80px" src="../fidle/img/00-Fidle-logo-01.svg"></img>
......
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/00-Fidle-header-01.svg"></img> <img width="800px" src="../fidle/img/00-Fidle-header-01.svg"></img>
# <!-- TITLE --> [VAE3] - Analysis of the VAE's latent space of MNIST dataset # <!-- TITLE --> [VAE3] - Analysis of the VAE's latent space of MNIST dataset
<!-- DESC --> Visualization and analysis of the VAE's latent space of the MNIST dataset
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) --> <!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->
## Objectives : ## Objectives :
- First data generation from **latent space** - First data generation from **latent space**
- Understanding of underlying principles - Understanding of underlying principles
- Model management - Model management
Here, we no longer consume data, we generate it ! ;-)
## What we're going to do : ## What we're going to do :
- Load a saved model - Load a saved model
- Reconstruct some images - Reconstruct some images
- Latent space visualization - Latent space visualization
- Matrix of generated images - Matrix of generated images
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Step 1 - Init python stuff ## Step 1 - Init python stuff
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
### 1.1 - Init python ### 1.1 - Init python
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
import numpy as np import numpy as np
import tensorflow as tf import tensorflow as tf
from tensorflow import keras from tensorflow import keras
from modules.models import VAE from modules.models import VAE
from modules.datagen import MNIST from modules.datagen import MNIST
import scipy.stats import scipy.stats
import matplotlib import matplotlib
import matplotlib.pyplot as plt import matplotlib.pyplot as plt
from barviz import Simplex from barviz import Simplex
from barviz import Collection from barviz import Collection
import sys import sys
import fidle import fidle
# Init Fidle environment # Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('VAE3') run_id, run_dir, datasets_dir = fidle.init('VAE3')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
### 1.2 - Parameters ### 1.2 - Parameters
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
scale = 1 scale = 1
seed = 123 seed = 123
models_dir = './run/VAE2' models_dir = './run/VAE2'
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
Override parameters (batch mode) - Just forget this cell Override parameters (batch mode) - Just forget this cell
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
fidle.override('scale', 'seed', 'models_dir') fidle.override('scale', 'seed', 'models_dir')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Step 2 - Get data ## Step 2 - Get data
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
x_data, y_data, _,_ = MNIST.get_data(seed=seed, scale=scale, train_prop=1 ) x_data, y_data, _,_ = MNIST.get_data(seed=seed, scale=scale, train_prop=1 )
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Step 3 - Reload best model ## Step 3 - Reload best model
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
vae=VAE() vae=VAE()
vae.reload(f'{models_dir}/models/best_model') vae.reload(f'{models_dir}/models/best_model')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Step 4 - Image reconstruction ## Step 4 - Image reconstruction
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
# ---- Select a few images
x_show = fidle.utils.pick_dataset(x_data, n=10) x_show = fidle.utils.pick_dataset(x_data, n=10)
# ---- Get latent points and reconstructed images # ---- Get latent points and reconstructed images
z_mean, z_var, z = vae.encoder.predict(x_show, verbose=0) z_mean, z_var, z = vae.encoder.predict(x_show, verbose=0)
x_reconst = vae.decoder.predict(z, verbose=0) x_reconst = vae.decoder.predict(z, verbose=0)
latent_dim = z.shape[1] latent_dim = z.shape[1]
# ---- Show it # ---- Show it
labels=[ str(np.round(z[i],1)) for i in range(10) ] labels=[ str(np.round(z[i],1)) for i in range(10) ]
fidle.utils.subtitle('Originals :') fidle.utils.subtitle('Originals :')
fidle.scrawler.images(x_show, None, indices='all', columns=10, x_size=2,y_size=2, save_as='01-original') fidle.scrawler.images(x_show, None, indices='all', columns=10, x_size=2,y_size=2, save_as='01-original')
fidle.utils.subtitle('Reconstructed :') fidle.utils.subtitle('Reconstructed :')
fidle.scrawler.images(x_reconst, None, indices='all', columns=10, x_size=2,y_size=2, save_as='02-reconstruct') fidle.scrawler.images(x_reconst, None, indices='all', columns=10, x_size=2,y_size=2, save_as='02-reconstruct')
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Step 5 - Visualizing the latent space ## Step 5 - Visualizing the latent space
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
n_show = 20000 n_show = 20000
# ---- Select images # ---- Select images
x_show, y_show = fidle.utils.pick_dataset(x_data,y_data, n=n_show) x_show, y_show = fidle.utils.pick_dataset(x_data,y_data, n=n_show)
# ---- Get latent points # ---- Get latent points
z_mean, z_var, z = vae.encoder.predict(x_show, verbose=0) z_mean, z_var, z = vae.encoder.predict(x_show, verbose=0)
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
### 5.1 - Classic 2d visualisation
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
fig = plt.figure(figsize=(14, 10)) fig = plt.figure(figsize=(14, 10))
plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=30) plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=30)
plt.colorbar() plt.colorbar()
fidle.scrawler.save_fig('03-Latent-space') fidle.scrawler.save_fig('03-Latent-space')
plt.show() plt.show()
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
### 5.2 - Simplex visualisation
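To place the latent codes on a simplex, the code below first rescales each D-dimensional vector with a softmax, so every point becomes a set of positive coordinates summing to 1 (barycentric coordinates). A NumPy sketch of this rescaling (with a max-subtraction added for numerical stability, which leaves the result unchanged):

``` python
import numpy as np

def softmax_rows(z):
    # Row-wise softmax: subtracting the row max avoids overflow in exp()
    # without changing the result, since softmax is shift-invariant.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

z  = np.array([[0.0, 1.0, 2.0],
               [3.0, 3.0, 3.0]])
zs = softmax_rows(z)   # each row now lies on the probability simplex
```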
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
if latent_dim<4: if latent_dim<4:
print('Sorry, this part only works if the latent space dimension is greater than 3')
else: else:
# ---- Softmax rescale # ---- Softmax rescale
# #
zs = np.exp(z)/np.sum(np.exp(z),axis=1,keepdims=True) zs = np.exp(z)/np.sum(np.exp(z),axis=1,keepdims=True)
# zc = zs * 1/np.max(zs) # zc = zs * 1/np.max(zs)
# ---- Create collection # ---- Create collection
# #
c = Collection(zs, colors=y_show, labels=y_show) c = Collection(zs, colors=y_show, labels=y_show)
c.attrs.markers_colormap = {'colorscale':'Rainbow','cmin':0,'cmax':latent_dim} c.attrs.markers_colormap = {'colorscale':'Rainbow','cmin':0,'cmax':latent_dim}
c.attrs.markers_size = 4 c.attrs.markers_size = 5
c.attrs.markers_border_width = 0 c.attrs.markers_border_width = 0
c.attrs.markers_opacity = 0.7 c.attrs.markers_opacity = 0.8
s = Simplex.build(latent_dim) s = Simplex.build(latent_dim)
s.attrs.width = 1000 s.attrs.width = 1000
s.attrs.height = 1000 s.attrs.height = 1000
s.plot(c) s.plot(c)
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
## Step 6 - Generate from latent space (latent_dim==2) ## Step 6 - Generate from latent space (latent_dim==2)
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
if latent_dim>2: if latent_dim>2:
print('Sorry, this part only works if the latent space is of dimension 2')
else: else:
grid_size = 14 grid_size = 14
grid_scale = 1. grid_scale = 1.
# ---- Draw a ppf grid # ---- Draw a ppf grid
grid=[] grid=[]
for y in scipy.stats.norm.ppf(np.linspace(0.99, 0.01, grid_size),scale=grid_scale): for y in scipy.stats.norm.ppf(np.linspace(0.99, 0.01, grid_size),scale=grid_scale):
for x in scipy.stats.norm.ppf(np.linspace(0.01, 0.99, grid_size),scale=grid_scale): for x in scipy.stats.norm.ppf(np.linspace(0.01, 0.99, grid_size),scale=grid_scale):
grid.append( (x,y) ) grid.append( (x,y) )
grid=np.array(grid) grid=np.array(grid)
# ---- Draw latent points and grid
fig = plt.figure(figsize=(12, 10)) fig = plt.figure(figsize=(12, 10))
plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=20) plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=20)
plt.scatter(grid[:, 0] , grid[:, 1], c = 'black', s=60, linewidth=2, marker='+', alpha=1) plt.scatter(grid[:, 0] , grid[:, 1], c = 'black', s=60, linewidth=2, marker='+', alpha=1)
fidle.scrawler.save_fig('04-Latent-grid') fidle.scrawler.save_fig('04-Latent-grid')
plt.show() plt.show()
# ---- Plot grid corresponding images # ---- Plot grid corresponding images
x_reconst = vae.decoder.predict([grid]) x_reconst = vae.decoder.predict([grid])
fidle.scrawler.images(x_reconst, indices='all', columns=grid_size, x_size=0.5,y_size=0.5, y_padding=0,spines_alpha=0.1, save_as='05-Latent-morphing') fidle.scrawler.images(x_reconst, indices='all', columns=grid_size, x_size=0.5,y_size=0.5, y_padding=0,spines_alpha=0.1, save_as='05-Latent-morphing')
``` ```
%% Cell type:code id: tags: %% Cell type:code id: tags:
``` python ``` python
fidle.end() fidle.end()
``` ```
%% Cell type:markdown id: tags: %% Cell type:markdown id: tags:
--- ---
<img width="80px" src="../fidle/img/00-Fidle-logo-01.svg"></img> <img width="80px" src="../fidle/img/00-Fidle-logo-01.svg"></img>
......
...@@ -13,7 +13,7 @@ ...@@ -13,7 +13,7 @@
# #
# This file describes the notebooks used by the Fidle training. # This file describes the notebooks used by the Fidle training.
version: 2.1b5 version: 2.1b6
content: notebooks content: notebooks
name: Notebooks Fidle name: Notebooks Fidle
description: All notebooks used by the Fidle training description: All notebooks used by the Fidle training
......
campain: campain:
version: '1.0' version: '1.0'
description: Automatically generated ci profile (13/10/22 00:58:06) description: Automatically generated ci profile (13/10/22 10:19:33)
directory: ./campains/default directory: ./campains/default
existing_notebook: 'remove # remove|skip' existing_notebook: 'remove # remove|skip'
report_template: 'fidle # fidle|default' report_template: 'fidle # fidle|default'
......
campain: campain:
version: 1.0 version: 1.0
description: Full validation of notebooks with scale parameters set to 1 description: Full validation of notebooks with scale parameters set to 1
directory: ./campains/cpu_small directory: ./campains/scale1_settings
existing_notebook: skip existing_notebook: skip
report_template: fidle report_template: fidle
timeout: 6000 timeout: 6000
......