Commit e9c99429 authored by Jean-Luc Parouty

Add Wine regression (v2.2.1)

parent f6e9925e
%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/00-Fidle-header-01.svg"></img>
# <!-- TITLE --> [WINE1] - Wine quality prediction with a Dense Network (DNN)
<!-- DESC --> Another example of regression, with a wine quality prediction!
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->
## Objectives :
- Predict the **quality of wines**, based on their analysis
- Understand the principle and architecture of a regression with a dense neural network, with backup and restore of the trained model.
The **[Wine Quality datasets](https://archive.ics.uci.edu/ml/datasets/wine+Quality)** are made up of analyses of a large number of wines, with an associated quality (between 0 and 10)
This dataset is provided by :
Paulo Cortez, University of Minho, Guimarães, Portugal, http://www3.dsi.uminho.pt/pcortez
A. Cerdeira, F. Almeida, T. Matos and J. Reis, Viticulture Commission of the Vinho Verde Region (CVRVV), Porto, Portugal, @2009
This dataset can be retrieved from the [University of California Irvine (UCI)](https://archive-beta.ics.uci.edu/ml/datasets/wine+quality)
Due to privacy and logistic issues, only physicochemical and sensory variables are available
There is no data about grape types, wine brand, wine selling price, etc.
- fixed acidity
- volatile acidity
- citric acid
- residual sugar
- chlorides
- free sulfur dioxide
- total sulfur dioxide
- density
- pH
- sulphates
- alcohol
- quality (score between 0 and 10)
## What we're going to do :
- (Retrieve data)
- (Preparing the data)
- (Build a model)
- Train and save the model
- Restore saved model
- Evaluate the model
- Make some predictions
%% Cell type:markdown id: tags:
## Step 1 - Import and init
%% Cell type:code id: tags:
``` python
# import os
# os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os,sys
from IPython.display import Markdown
from importlib import reload
import fidle
# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('WINE1')
```
%% Cell type:markdown id: tags:
Verbosity during training :
- 0 = silent
- 1 = progress bar
- 2 = one line per epoch
%% Cell type:code id: tags:
``` python
fit_verbosity = 1
dataset_name = 'winequality-red.csv'
```
%% Cell type:markdown id: tags:
Override parameters (batch mode) - Just forget this cell
%% Cell type:code id: tags:
``` python
fidle.override('fit_verbosity', 'dataset_name')
```
%% Cell type:markdown id: tags:
## Step 2 - Retrieve data
%% Cell type:code id: tags:
``` python
data = pd.read_csv(f'{datasets_dir}/WineQuality/origine/{dataset_name}', header=0,sep=';')
display(data.head(5).style.format("{0:.2f}"))
print('Missing Data : ',data.isna().sum().sum(), ' Shape is : ', data.shape)
```
%% Cell type:markdown id: tags:
## Step 3 - Preparing the data
### 3.1 - Split data
We will use 80% of the data for training and 20% for validation.
x will be the analysis data and y the quality
%% Cell type:code id: tags:
``` python
# ---- Split => train, test
#
data = data.sample(frac=1., axis=0) # Shuffle
data_train = data.sample(frac=0.8, axis=0) # get 80 %
data_test = data.drop(data_train.index) # test = all - train
# ---- Split => x,y (quality is the target)
#
x_train = data_train.drop('quality', axis=1)
y_train = data_train['quality']
x_test = data_test.drop('quality', axis=1)
y_test = data_test['quality']
print('Original data shape was : ',data.shape)
print('x_train : ',x_train.shape, 'y_train : ',y_train.shape)
print('x_test : ',x_test.shape, 'y_test : ',y_test.shape)
```
%% Cell type:markdown id: tags:
### 3.2 - Data normalization
**Note :**
- All input data must be normalized, train and test.
- To do this we will subtract the mean and divide by the standard deviation.
- But test data should not be used in any way, even for normalization.
- The mean and the standard deviation will therefore only be calculated with the train data (the sklearn sketch below applies the same rule).
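%% Cell type:markdown id: tags:
The sketch below applies the same train-only rule with scikit-learn's `StandardScaler`, as an illustrative alternative (sklearn is assumed to be available; it is not otherwise used in this notebook):
%% Cell type:code id: tags:
``` python
# Same train-only discipline with scikit-learn (illustrative sketch)
from sklearn.preprocessing import StandardScaler
scaler    = StandardScaler().fit(x_train)    # statistics from train data only
x_train_n = scaler.transform(x_train)
x_test_n  = scaler.transform(x_test)         # test reuses the train statistics
```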
%% Cell type:code id: tags:
``` python
display(x_train.describe().style.format("{0:.2f}").set_caption("Before normalization :"))
mean = x_train.mean()
std = x_train.std()
x_train = (x_train - mean) / std
x_test = (x_test - mean) / std
display(x_train.describe().style.format("{0:.2f}").set_caption("After normalization :"))
# Convert our DataFrames to numpy arrays
x_train, y_train = np.array(x_train), np.array(y_train)
x_test, y_test = np.array(x_test), np.array(y_test)
```
%% Cell type:markdown id: tags:
## Step 4 - Build a model
More information about :
- [Optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers)
- [Activation](https://www.tensorflow.org/api_docs/python/tf/keras/activations)
- [Loss](https://www.tensorflow.org/api_docs/python/tf/keras/losses)
- [Metrics](https://www.tensorflow.org/api_docs/python/tf/keras/metrics)
%% Cell type:code id: tags:
``` python
def get_model_v1(shape):
    model = keras.models.Sequential()
    model.add(keras.layers.Input(shape, name="InputLayer"))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n1'))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n2'))
    model.add(keras.layers.Dense(1, name='Output'))
    model.compile(optimizer = 'rmsprop',
                  loss      = 'mse',
                  metrics   = ['mae', 'mse'])
    return model
```
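%% Cell type:markdown id: tags:
As an illustration of the Keras APIs linked above, the same compilation can be written with explicit objects instead of string shortcuts. A minimal sketch on a throwaway model, where the learning rate is an assumed value (0.001 is the Keras RMSprop default):
%% Cell type:code id: tags:
``` python
# Sketch: explicit optimizer / loss / metric objects (assumed values).
# Caveat: explicit metric objects rename the history keys
# ('mean_absolute_error' instead of 'mae').
demo = keras.models.Sequential([ keras.layers.Input((11,)),
                                 keras.layers.Dense(1) ])
demo.compile(optimizer = keras.optimizers.RMSprop(learning_rate=1e-3),
             loss      = keras.losses.MeanSquaredError(),
             metrics   = [keras.metrics.MeanAbsoluteError()])
```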
%% Cell type:markdown id: tags:
## Step 5 - Train the model
### 5.1 - Get it
%% Cell type:code id: tags:
``` python
model=get_model_v1( (11,) )
model.summary()
```
%% Cell type:markdown id: tags:
### 5.2 - Add callback
%% Cell type:code id: tags:
``` python
os.makedirs('./run/models', mode=0o750, exist_ok=True)
save_dir = "./run/models/best_model.h5"
savemodel_callback = tf.keras.callbacks.ModelCheckpoint(filepath=save_dir, verbose=0, save_best_only=True)
```
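%% Cell type:markdown id: tags:
Other callbacks can be combined with this checkpoint, for instance early stopping. A sketch, not used in the training below (the patience value is an assumption):
%% Cell type:code id: tags:
``` python
# Optional: stop training when val_loss stops improving (illustration only)
earlystop_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                                      patience=10,
                                                      restore_best_weights=True)
# It would be passed as: callbacks=[savemodel_callback, earlystop_callback]
```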
%% Cell type:markdown id: tags:
### 5.3 - Train it
%% Cell type:code id: tags:
``` python
history = model.fit(x_train,
                    y_train,
                    epochs          = 100,
                    batch_size      = 10,
                    verbose         = fit_verbosity,
                    validation_data = (x_test, y_test),
                    callbacks       = [savemodel_callback])
```
%% Cell type:markdown id: tags:
## Step 6 - Evaluate
### 6.1 - Model evaluation
MAE = Mean Absolute Error (between the labels and predictions)
Quality is scored between 0 and 10, so a MAE of 0.5 represents an average prediction error of half a quality point.
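%% Cell type:markdown id: tags:
To make the metric concrete, MAE and MSE can be computed by hand. A minimal sketch with made-up values:
%% Cell type:code id: tags:
``` python
# Hand computation of MAE / MSE on made-up quality values (illustration only)
y_true = np.array([5, 6, 5, 7])
y_hat  = np.array([5.4, 5.8, 5.1, 6.5])
print('MAE :', np.mean(np.abs(y_true - y_hat)))   # 0.3
print('MSE :', np.mean((y_true - y_hat)**2))      # 0.115
```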
%% Cell type:code id: tags:
``` python
score = model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Cell type:markdown id: tags:
### 6.2 - Training history
What was the best result during our training ?
%% Cell type:code id: tags:
``` python
print("min( val_mae ) : {:.4f}".format( min(history.history["val_mae"]) ) )
```
%% Cell type:code id: tags:
``` python
fidle.scrawler.history( history, plot={'MSE' :['mse', 'val_mse'],
                                       'MAE' :['mae', 'val_mae'],
                                       'LOSS':['loss','val_loss']}, save_as='01-history')
```
%% Cell type:markdown id: tags:
## Step 7 - Restore a model :
%% Cell type:markdown id: tags:
### 7.1 - Reload model
%% Cell type:code id: tags:
``` python
loaded_model = tf.keras.models.load_model('./run/models/best_model.h5')
loaded_model.summary()
print("Loaded.")
```
%% Cell type:markdown id: tags:
### 7.2 - Evaluate it :
%% Cell type:code id: tags:
``` python
score = loaded_model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Cell type:markdown id: tags:
### 7.3 - Make a prediction
%% Cell type:code id: tags:
``` python
# ---- Pick n entries from our test set
n = 200
ii = np.random.randint(0, len(x_test), n)
x_sample = x_test[ii]
y_sample = y_test[ii]
```
%% Cell type:code id: tags:
``` python
# ---- Make some predictions
y_pred = loaded_model.predict( x_sample, verbose=2 )
```
%% Cell type:code id: tags:
``` python
# ---- Show it
print('Wine Prediction Real Delta')
for i in range(n):
    pred  = y_pred[i][0]
    real  = y_sample[i]
    delta = real - pred
    print(f'{i:03d}        {pred:.2f}       {real}      {delta:+.2f} ')
```
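%% Cell type:markdown id: tags:
A quick way to judge these predictions globally is to plot them against the true values. A sketch using the matplotlib import from Step 1 (the plotted quality range is an assumption):
%% Cell type:code id: tags:
``` python
# Predicted vs real quality; points on the dashed line are perfect predictions
plt.figure(figsize=(6,6))
plt.scatter(y_sample, y_pred[:,0], alpha=0.4)
plt.plot([3,9], [3,9], 'r--')    # identity line over an assumed quality range
plt.xlabel('Real quality')
plt.ylabel('Predicted quality')
plt.show()
```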
%% Cell type:code id: tags:
``` python
fidle.end()
```
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/00-Fidle-logo-01.svg"></img>
%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/00-Fidle-header-01.svg"></img>
# <!-- TITLE --> [PANDAS1] - A few examples with Pandas
<!-- DESC --> Pandas is another essential tool for scientific Python.
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->
## Objectives :
- Understand how to slice a dataset
%% Cell type:markdown id: tags:
## Step 1 - A little cooking with datasets
%% Cell type:code id: tags:
``` python
import pandas as pd
import numpy as np
```
%% Cell type:code id: tags:
``` python
# Get some data
a = np.arange(50).reshape(10,5)
print('Starting data: \n',a)
```
%% Cell type:code id: tags:
``` python
# Create a DataFrame
df_all = pd.DataFrame(a, columns=['A','B','C','D','E'])
print('\nDataFrame :')
display(df_all)
```
%% Cell type:code id: tags:
``` python
# Shuffle data
df_all = df_all.sample(frac=1, axis=0)
print('\nDataFrame randomly shuffled :')
display(df_all)
```
%% Cell type:code id: tags:
``` python
# Get a train part
df_train = df_all.sample(frac=0.8, axis=0)
print('\nTrain set (80%) :')
display(df_train)
```
%% Cell type:code id: tags:
``` python
# Get test set as all - train
df_test = df_all.drop(df_train.index)
print('\nTest set (all - train) :')
display(df_test)
```
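%% Cell type:markdown id: tags:
A quick sanity check, as a sketch: the split above should be a true partition of `df_all`.
%% Cell type:code id: tags:
``` python
# The train and test parts must be disjoint and together cover df_all
assert set(df_train.index).isdisjoint(df_test.index)
assert len(df_train) + len(df_test) == len(df_all)
print('Split OK')
```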
%% Cell type:code id: tags:
``` python
x_train = df_train.drop('E', axis=1)
y_train = df_train['E']
x_test = df_test.drop('E', axis=1)
y_test = df_test['E']
display(x_train)
display(y_train)
```
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/00-Fidle-logo-01.svg"></img>
%% Cell type:code id:c0502242 tags:
``` python
from IPython.display import display,Markdown
display(Markdown(open('README.md', 'r').read()))
#
# This README is visible under Jupyter Lab ;-)
# Automatically generated on : 16/10/22 21:33:51
```
%% Output
<a name="top"></a>
[<img width="600px" src="fidle/img/00-Fidle-titre-01.svg"></img>](#top)
<!-- --------------------------------------------------- -->
<!-- To correctly view this README under Jupyter Lab -->
<!-- Open the notebook: README.ipynb! -->
<!-- --------------------------------------------------- -->
## About Fidle
This repository contains all the documents and links of the **Fidle Training**.
Fidle (for Formation Introduction au Deep Learning) is a 3-day training session co-organized
by the 3IA MIAI institute, the CNRS, via the Mission for Transversal and Interdisciplinary
Initiatives (MITI) and the University of Grenoble Alpes (UGA).
The objectives of this training are :
- Understanding the **basics of Deep Learning** neural networks
- Develop a **first experience** through simple and representative examples
- Understanding **Tensorflow/Keras** and **Jupyter lab** technologies
- Getting to know the **academic computing environments** (Tier-2 or Tier-1) with powerful GPUs
For more information, see **https://fidle.cnrs.fr** :
- **[Fidle site](https://fidle.cnrs.fr)**
- **[Presentation of the training](https://fidle.cnrs.fr/presentation)**
- **[Program 2021/2022](https://fidle.cnrs.fr/programme)**
- [Subscribe to the list](https://fidle.cnrs.fr/listeinfo), to stay informed !
- [Find us on youtube](https://fidle.cnrs.fr/youtube)
- [Corrected notebooks](https://fidle.cnrs.fr/done)
For more information, you can contact us at :
[<img width="200px" style="vertical-align:middle" src="fidle/img/00-Mail_contact.svg"></img>](#top)
Current Version : <!-- VERSION_BEGIN -->2.2.1<!-- VERSION_END -->
## Course materials
| | | | |
|:--:|:--:|:--:|:--:|
| **[<img width="50px" src="fidle/img/00-Fidle-pdf.svg"></img><br>Course slides](https://fidle.cnrs.fr/supports)**<br>The course in pdf format<br>(12 Mo)| **[<img width="50px" src="fidle/img/00-Notebooks.svg"></img><br>Notebooks](https://fidle.cnrs.fr/notebooks)**<br> &nbsp;&nbsp;&nbsp;&nbsp;Get a Zip or clone this repository &nbsp;&nbsp;&nbsp;&nbsp;<br>(40 Mo)| **[<img width="50px" src="fidle/img/00-Datasets-tar.svg"></img><br>Datasets](https://fidle.cnrs.fr/fidle-datasets.tar)**<br>All the needed datasets<br>(1.2 Go)|**[<img width="50px" src="fidle/img/00-Videos.svg"></img><br>Videos](https://fidle.cnrs.fr/youtube)**<br>&nbsp;&nbsp;&nbsp;&nbsp;Our Youtube channel&nbsp;&nbsp;&nbsp;&nbsp;<br>&nbsp;|
Have a look at **[How to get and install](https://fidle.cnrs.fr/installation)** these notebooks and datasets.
## Jupyter notebooks
<!-- TOC_BEGIN -->
<!-- Automatically generated on : 16/10/22 21:33:51 -->
### Linear and logistic regression
- **[LINR1](LinearReg/01-Linear-Regression.ipynb)** - [Linear regression with direct resolution](LinearReg/01-Linear-Regression.ipynb)
Low-level implementation, using numpy, of a direct resolution for a linear regression
- **[GRAD1](LinearReg/02-Gradient-descent.ipynb)** - [Linear regression with gradient descent](LinearReg/02-Gradient-descent.ipynb)
Low level implementation of a solution by gradient descent. Basic and stochastic approach.
- **[POLR1](LinearReg/03-Polynomial-Regression.ipynb)** - [Complexity Syndrome](LinearReg/03-Polynomial-Regression.ipynb)
Illustration of the problem of complexity with the polynomial regression
- **[LOGR1](LinearReg/04-Logistic-Regression.ipynb)** - [Logistic regression](LinearReg/04-Logistic-Regression.ipynb)
Simple example of logistic regression with a sklearn solution
### Perceptron Model 1957
- **[PER57](IRIS/01-Simple-Perceptron.ipynb)** - [Perceptron Model 1957](IRIS/01-Simple-Perceptron.ipynb)
Example of use of a Perceptron, with sklearn and IRIS dataset of 1936 !
### Basic regression using DNN
- **[BHPD1](BHPD/01-DNN-Regression.ipynb)** - [Regression with a Dense Network (DNN)](BHPD/01-DNN-Regression.ipynb)
Simple example of a regression with the dataset Boston Housing Prices Dataset (BHPD)
- **[BHPD2](BHPD/02-DNN-Regression-Premium.ipynb)** - [Regression with a Dense Network (DNN) - Advanced code](BHPD/02-DNN-Regression-Premium.ipynb)
A more advanced implementation of the precedent example
- **[WINE1](BHPD/03-DNN-Wine-Regression.ipynb)** - [Wine quality prediction with a Dense Network (DNN)](BHPD/03-DNN-Wine-Regression.ipynb)
Another example of regression, with a wine quality prediction!
### Basic classification using a DNN
- **[MNIST1](MNIST/01-DNN-MNIST.ipynb)** - [Simple classification with DNN](MNIST/01-DNN-MNIST.ipynb)
An example of classification using a dense neural network for the famous MNIST dataset
- **[MNIST2](MNIST/02-CNN-MNIST.ipynb)** - [Simple classification with CNN](MNIST/02-CNN-MNIST.ipynb)
An example of classification using a convolutional neural network for the famous MNIST dataset
### Image classification with Convolutional Neural Networks (CNN)
- **[GTSRB1](GTSRB/01-Preparation-of-data.ipynb)** - [Dataset analysis and preparation](GTSRB/01-Preparation-of-data.ipynb)
Episode 1 : Analysis of the GTSRB dataset and creation of an enhanced dataset
- **[GTSRB2](GTSRB/02-First-convolutions.ipynb)** - [First convolutions](GTSRB/02-First-convolutions.ipynb)
Episode 2 : First convolutions and first classification of our traffic signs
- **[GTSRB3](GTSRB/03-Tracking-and-visualizing.ipynb)** - [Training monitoring](GTSRB/03-Tracking-and-visualizing.ipynb)
Episode 3 : Monitoring, analysis and check points during a training session
- **[GTSRB4](GTSRB/04-Data-augmentation.ipynb)** - [Data augmentation ](GTSRB/04-Data-augmentation.ipynb)
Episode 4 : Adding data by data augmentation when we lack it, to improve our results
- **[GTSRB5](GTSRB/05-Full-convolutions.ipynb)** - [Full convolutions](GTSRB/05-Full-convolutions.ipynb)
Episode 5 : A lot of models, a lot of datasets and a lot of results.
- **[GTSRB6](GTSRB/06-Notebook-as-a-batch.ipynb)** - [Full convolutions as a batch](GTSRB/06-Notebook-as-a-batch.ipynb)
Episode 6 : To compute bigger, use your notebook in batch mode
- **[GTSRB7](GTSRB/07-Show-report.ipynb)** - [Batch reports](GTSRB/07-Show-report.ipynb)
Episode 7 : Displaying our jobs report, and the winner is...
- **[GTSRB10](GTSRB/batch_oar.sh)** - [OAR batch script submission](GTSRB/batch_oar.sh)
Bash script for an OAR batch submission of an ipython code
- **[GTSRB11](GTSRB/batch_slurm.sh)** - [SLURM batch script](GTSRB/batch_slurm.sh)
Bash script for a Slurm batch submission of an ipython code
### Sentiment analysis with word embedding
- **[IMDB1](IMDB/01-One-hot-encoding.ipynb)** - [Sentiment analysis with one-hot encoding](IMDB/01-One-hot-encoding.ipynb)
A basic example of sentiment analysis with sparse encoding, using a dataset from Internet Movie Database (IMDB)
- **[IMDB2](IMDB/02-Keras-embedding.ipynb)** - [Sentiment analysis with text embedding](IMDB/02-Keras-embedding.ipynb)
A very classical example of word embedding with a dataset from Internet Movie Database (IMDB)
- **[IMDB3](IMDB/03-Prediction.ipynb)** - [Reload and reuse a saved model](IMDB/03-Prediction.ipynb)
Retrieving a saved model to perform a sentiment analysis (movie review)
- **[IMDB4](IMDB/04-Show-vectors.ipynb)** - [Reload embedded vectors](IMDB/04-Show-vectors.ipynb)
Retrieving embedded vectors from our trained model
- **[IMDB5](IMDB/05-LSTM-Keras.ipynb)** - [Sentiment analysis with a RNN network](IMDB/05-LSTM-Keras.ipynb)
Still the same problem, but with a network combining embedding and RNN
### Time series with Recurrent Neural Network (RNN)
- **[LADYB1](SYNOP/LADYB1-Ladybug.ipynb)** - [Prediction of a 2D trajectory via RNN](SYNOP/LADYB1-Ladybug.ipynb)
Artificial dataset generation and prediction attempt via a recurrent network
- **[SYNOP1](SYNOP/SYNOP1-Preparation-of-data.ipynb)** - [Preparation of data](SYNOP/SYNOP1-Preparation-of-data.ipynb)
Episode 1 : Data analysis and preparation of a usable meteorological dataset (SYNOP)
- **[SYNOP2](SYNOP/SYNOP2-First-predictions.ipynb)** - [First predictions at 3h](SYNOP/SYNOP2-First-predictions.ipynb)
Episode 2 : RNN training session for weather prediction attempt at 3h
- **[SYNOP3](SYNOP/SYNOP3-12h-predictions.ipynb)** - [12h predictions](SYNOP/SYNOP3-12h-predictions.ipynb)
Episode 3 : Attempt to predict over a longer term
### Sentiment analysis with transformer
- **[TRANS1](Transformers/01-Distilbert.ipynb)** - [IMDB, Sentiment analysis with Transformers ](Transformers/01-Distilbert.ipynb)
Using a Transformer to perform a sentiment analysis (IMDB) - Jean Zay version
- **[TRANS2](Transformers/02-distilbert_colab.ipynb)** - [IMDB, Sentiment analysis with Transformers ](Transformers/02-distilbert_colab.ipynb)
Using a Transformer to perform a sentiment analysis (IMDB) - Colab version
### Unsupervised learning with an autoencoder neural network (AE)
- **[AE1](AE/01-Prepare-MNIST-dataset.ipynb)** - [Prepare a noisy MNIST dataset](AE/01-Prepare-MNIST-dataset.ipynb)
Episode 1: Preparation of a noisy MNIST dataset
- **[AE2](AE/02-AE-with-MNIST.ipynb)** - [Building and training an AE denoiser model](AE/02-AE-with-MNIST.ipynb)
Episode 2 : Construction of a denoising autoencoder and its training with a noisy MNIST dataset.
- **[AE3](AE/03-AE-with-MNIST-post.ipynb)** - [Playing with our denoiser model](AE/03-AE-with-MNIST-post.ipynb)
Episode 3 : Using the previously trained autoencoder to denoise data
- **[AE4](AE/04-ExtAE-with-MNIST.ipynb)** - [Denoiser and classifier model](AE/04-ExtAE-with-MNIST.ipynb)
Episode 4 : Construction of a denoiser and classifier model
- **[AE5](AE/05-ExtAE-with-MNIST.ipynb)** - [Advanced denoiser and classifier model](AE/05-ExtAE-with-MNIST.ipynb)
Episode 5 : Construction of an advanced denoiser and classifier model
### Generative network with Variational Autoencoder (VAE)
- **[VAE1](VAE/01-VAE-with-MNIST.ipynb)** - [First VAE, using functional API (MNIST dataset)](VAE/01-VAE-with-MNIST.ipynb)
Construction and training of a VAE, using the functional API, with a latent space of small dimension.
- **[VAE2](VAE/02-VAE-with-MNIST.ipynb)** - [VAE, using a custom model class (MNIST dataset)](VAE/02-VAE-with-MNIST.ipynb)
Construction and training of a VAE, using model subclass, with a latent space of small dimension.
- **[VAE3](VAE/03-VAE-with-MNIST-post.ipynb)** - [Analysis of the VAE's latent space of MNIST dataset](VAE/03-VAE-with-MNIST-post.ipynb)
Visualization and analysis of the VAE's latent space for the MNIST dataset
- **[VAE5](VAE/05-About-CelebA.ipynb)** - [Another game play : About the CelebA dataset](VAE/05-About-CelebA.ipynb)
Episode 1 : Presentation of the CelebA dataset and problems related to its size
- **[VAE6](VAE/06-Prepare-CelebA-datasets.ipynb)** - [Generation of a clustered dataset](VAE/06-Prepare-CelebA-datasets.ipynb)
Episode 2 : Analysis of the CelebA dataset and creation of a clustered and usable dataset
- **[VAE7](VAE/07-Check-CelebA.ipynb)** - [Checking the clustered dataset](VAE/07-Check-CelebA.ipynb)
Episode 3 : Clustered dataset verification and testing of our datagenerator
- **[VAE8](VAE/08-VAE-with-CelebA-128x128.ipynb)** - [Training session for our VAE with 128x128 images](VAE/08-VAE-with-CelebA-128x128.ipynb)
Episode 4 : Training with our clustered datasets in notebook or batch mode
- **[VAE9](VAE/09-VAE-with-CelebA-192x160.ipynb)** - [Training session for our VAE with 192x160 images](VAE/09-VAE-with-CelebA-192x160.ipynb)
Episode 4 : Training with our clustered datasets in notebook or batch mode
- **[VAE10](VAE/10-VAE-with-CelebA-post.ipynb)** - [Data generation from latent space](VAE/10-VAE-with-CelebA-post.ipynb)
Episode 5 : Exploring latent space to generate new data
- **[VAE10](VAE/batch_slurm.sh)** - [SLURM batch script](VAE/batch_slurm.sh)
Bash script for SLURM batch submission of VAE8 notebooks
### Generative Adversarial Networks (GANs)
- **[SHEEP1](DCGAN/01-DCGAN-Draw-me-a-sheep.ipynb)** - [A first DCGAN to Draw a Sheep](DCGAN/01-DCGAN-Draw-me-a-sheep.ipynb)
Episode 1 : Draw me a sheep, revisited with a DCGAN
- **[SHEEP2](DCGAN/02-WGANGP-Draw-me-a-sheep.ipynb)** - [A WGAN-GP to Draw a Sheep](DCGAN/02-WGANGP-Draw-me-a-sheep.ipynb)
Episode 2 : Draw me a sheep, revisited with a WGAN-GP
### Deep Reinforcement Learning (DRL)
- **[DRL1](DRL/FIDLE_DQNfromScratch.ipynb)** - [Solving CartPole with DQN](DRL/FIDLE_DQNfromScratch.ipynb)
Using a Deep Q-Network to play CartPole - an inverted pendulum problem (PyTorch)
- **[DRL2](DRL/FIDLE_rl_baselines_zoo.ipynb)** - [RL Baselines3 Zoo: Training in Colab](DRL/FIDLE_rl_baselines_zoo.ipynb)
Demo of Stable-Baselines3 with Colab
### Miscellaneous
- **[ACTF1](Misc/Activation-Functions.ipynb)** - [Activation functions](Misc/Activation-Functions.ipynb)
Some activation functions, with their derivatives.
- **[NP1](Misc/Numpy.ipynb)** - [A short introduction to Numpy](Misc/Numpy.ipynb)
Numpy is an essential tool for scientific Python.
- **[SCRATCH1](Misc/Scratchbook.ipynb)** - [Scratchbook](Misc/Scratchbook.ipynb)
A scratchbook for small examples
- **[TSB1](Misc/Using-Tensorboard.ipynb)** - [Tensorboard with/from Jupyter ](Misc/Using-Tensorboard.ipynb)
4 ways to use Tensorboard from the Jupyter environment
- **[PANDAS1](Misc/Using-pandas.ipynb)** - [A few examples with Pandas](Misc/Using-pandas.ipynb)
Pandas is another essential tool for scientific Python.
<!-- TOC_END -->
## Installation
Have a look at **[How to get and install](https://fidle.cnrs.fr/installation)** these notebooks and datasets.
## Licence
[<img width="100px" src="fidle/img/00-fidle-CC BY-NC-SA.svg"></img>](https://creativecommons.org/licenses/by-nc-sa/4.0/)
\[en\] Attribution - NonCommercial - ShareAlike 4.0 International (CC BY-NC-SA 4.0)
\[Fr\] Attribution - Pas d’Utilisation Commerciale - Partage dans les Mêmes Conditions 4.0 International
See [License](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
See [Disclaimer](https://creativecommons.org/licenses/by-nc-sa/4.0/#).
----
[<img width="80px" src="fidle/img/00-Fidle-logo-01.svg"></img>](#top)
#
# This file describes the notebooks used by the Fidle training.
version: 2.2.1
content: notebooks
name: Notebooks Fidle
description: All notebooks used by the Fidle training
_metadata_:
version: '1.0'
output_tag: ==ci==
save_figs: true
description: Full profile for CI
#
# ------ LinearReg -------------------------------------------------
#
LINR1:
notebook_id: LINR1
notebook_dir: LinearReg
notebook_src: 01-Linear-Regression.ipynb
notebook_tag: default
GRAD1:
notebook_id: GRAD1
notebook_dir: LinearReg
notebook_src: 02-Gradient-descent.ipynb
notebook_tag: default
POLR1:
notebook_id: POLR1
notebook_dir: LinearReg
notebook_src: 03-Polynomial-Regression.ipynb
notebook_tag: default
LOGR1:
notebook_id: LOGR1
notebook_dir: LinearReg
notebook_src: 04-Logistic-Regression.ipynb
notebook_tag: default
PER57:
notebook_id: PER57
notebook_dir: IRIS
notebook_src: 01-Simple-Perceptron.ipynb
notebook_tag: default
#
# ------ BHPD ------------------------------------------------------
#
BHPD1:
notebook_id: BHPD1
notebook_dir: BHPD
notebook_src: 01-DNN-Regression.ipynb
notebook_tag: default
BHPD2:
notebook_id: BHPD2
notebook_dir: BHPD
notebook_src: 02-DNN-Regression-Premium.ipynb
notebook_tag: default
#
# ------ MNIST -----------------------------------------------------
#
MNIST1:
notebook_id: MNIST1
notebook_dir: MNIST
notebook_src: 01-DNN-MNIST.ipynb
notebook_tag: default
#
# ------ GTSRB -----------------------------------------------------
#
GTSRB1:
notebook_id: GTSRB1
notebook_dir: GTSRB
notebook_src: 01-Preparation-of-data.ipynb
notebook_tag: default
overrides:
scale: 0.01
output_dir: './data'
GTSRB2:
notebook_id: GTSRB2
notebook_dir: GTSRB
notebook_src: 02-First-convolutions.ipynb
notebook_tag: default
overrides:
run_dir: ./run/GTSRB2_ci
enhanced_dir: './data'
dataset_name: set-24x24-L
batch_size: 64
epochs: 5
scale: 1
GTSRB3:
notebook_id: GTSRB3
notebook_dir: GTSRB
notebook_src: 03-Tracking-and-visualizing.ipynb
notebook_tag: default
overrides:
run_dir: ./run/GTSRB3_ci
enhanced_dir: './data'
dataset_name: set-24x24-L
batch_size: 64
epochs: 5
scale: 1
GTSRB4:
notebook_id: GTSRB4
notebook_dir: GTSRB
notebook_src: 04-Data-augmentation.ipynb
notebook_tag: default
overrides:
run_dir: ./run/GTSRB4_ci
enhanced_dir: './data'
dataset_name: set-24x24-L
batch_size: 64
epochs: 5
scale: 1
GTSRB5_r1:
notebook_id: GTSRB5
notebook_dir: GTSRB
notebook_src: 05-Full-convolutions.ipynb
notebook_tag: =1==ci==
overrides:
run_dir: ./run/GTSRB5_ci
enhanced_dir: './data'
datasets: "['set-24x24-L', 'set-24x24-RGB', 'set-48x48-L', 'set-48x48-RGB', 'set-24x24-L-LHE', 'set-24x24-RGB-HE', 'set-48x48-L-LHE', 'set-48x48-RGB-HE']"
models: "{'v1':'get_model_v1', 'v2':'get_model_v2', 'v3':'get_model_v3'}"
batch_size: 64
epochs: 16
scale: 1
with_datagen: False
verbose: 0
GTSRB5_r2:
notebook_id: GTSRB5
notebook_dir: GTSRB
notebook_src: 05-Full-convolutions.ipynb
notebook_tag: =2==ci==
overrides:
run_dir: ./run/GTSRB5_ci
enhanced_dir: './data'
datasets: "['set-48x48-L', 'set-48x48-RGB']"
models: "{'v2':'get_model_v2', 'v3':'get_model_v3'}"
batch_size: 64
epochs: 16
scale: 1
with_datagen: False
verbose: 0
GTSRB6:
notebook_id: GTSRB6
notebook_dir: GTSRB
notebook_src: 06-Notebook-as-a-batch.ipynb
notebook_tag: default
GTSRB7:
notebook_id: GTSRB7
notebook_dir: GTSRB
notebook_src: 07-Show-report.ipynb
notebook_tag: default
overrides:
run_dir: ./run/GTSRB7_ci
report_dir: ./run/GTSRB5_ci
#
# ------ IMDB ------------------------------------------------------
#
IMDB1:
notebook_id: IMDB1
notebook_dir: IMDB
notebook_src: 01-Embedding-Keras.ipynb
notebook_tag: default
IMDB2:
notebook_id: IMDB2
notebook_dir: IMDB
notebook_src: 02-Prediction.ipynb
notebook_tag: default
IMDB3:
notebook_id: IMDB3
notebook_dir: IMDB
notebook_src: 03-LSTM-Keras.ipynb
notebook_tag: default
#
# ------ SYNOP -----------------------------------------------------
#
SYNOP1:
notebook_id: SYNOP1
notebook_dir: SYNOP
notebook_src: 01-Preparation-of-data.ipynb
notebook_tag: default
SYNOP2:
notebook_id: SYNOP2
notebook_dir: SYNOP
notebook_src: 02-First-predictions.ipynb
notebook_tag: default
overrides:
scale: 0.5
train_prop: 0.8
sequence_len: 16
batch_size: 32
epochs: 10
SYNOP3:
notebook_id: SYNOP3
notebook_dir: SYNOP
notebook_src: 03-12h-predictions.ipynb
notebook_tag: default
overrides:
iterations: 4
scale: 1
train_prop: 0.8
sequence_len: 16
batch_size: 32
epochs: 10
#
# ------ AE --------------------------------------------------------
#
AE1:
notebook_id: AE1
notebook_dir: AE
notebook_src: 01-AE-with-MNIST.ipynb
notebook_tag: default
AE2:
notebook_id: AE2
notebook_dir: AE
notebook_src: 02-AE-with-MNIST-post.ipynb
notebook_tag: default
#
# ------ VAE -------------------------------------------------------
#
VAE1:
notebook_id: VAE1
notebook_dir: VAE
notebook_src: 01-VAE-with-MNIST.ipynb
notebook_tag: default
overrides:
run_dir: ./run/VAE1_ci
scale: 1
latent_dim: 2
r_loss_factor: 0.994
batch_size: 64
epochs: 10
VAE2:
notebook_id: VAE2
notebook_dir: VAE
notebook_src: 02-VAE-with-MNIST-post.ipynb
notebook_tag: default
overrides:
run_dir: ./run/VAE1_ci
VAE5:
notebook_id: VAE5
notebook_dir: VAE
notebook_src: 05-About-CelebA.ipynb
notebook_tag: default
VAE6:
notebook_id: VAE6
notebook_dir: VAE
notebook_src: 06-Prepare-CelebA-datasets.ipynb
notebook_tag: default
overrides:
scale: 0.02
image_size: '(192,160)'
output_dir: ./data
exit_if_exist: False
VAE7:
notebook_id: VAE7
notebook_dir: VAE
notebook_src: 07-Check-CelebA.ipynb
notebook_tag: default
overrides:
image_size: '(192,160)'
enhanced_dir: './data'
VAE8:
notebook_id: VAE8
notebook_dir: VAE
notebook_src: 08-VAE-with-CelebA.ipynb
notebook_tag: default
overrides:
run_dir: ./run/VAE8_ci
scale: 1
image_size: '(192,160)'
enhanced_dir: './data'
latent_dim: 300
r_loss_factor: 0.6
batch_size: 64
epochs: 15
VAE9:
notebook_id: VAE9
notebook_dir: VAE
notebook_src: 09-VAE-with-CelebA-post.ipynb
notebook_tag: default
overrides:
run_dir: ./run/VAE8_ci
image_size: '(192,160)'
enhanced_dir: './data'
#
# ------ Misc ------------------------------------------------------
#
ACTF1:
notebook_id: ACTF1
notebook_dir: Misc
notebook_src: Activation-Functions.ipynb
notebook_tag: default
NP1:
notebook_id: NP1
notebook_dir: Misc
notebook_src: Numpy.ipynb
notebook_tag: default
TSB1:
notebook_id: TSB1
notebook_dir: Misc
notebook_src: Using-Tensorboard.ipynb
notebook_tag: default
_metadata_:
version: '1.0'
output_tag: ==done==
save_figs: true
description: Full profile for GPU
#
# ------ LinearReg -------------------------------------------------
#
Nb_LINR1:
notebook_id: LINR1
notebook_dir: LinearReg
notebook_src: 01-Linear-Regression.ipynb
notebook_tag: default
Nb_GRAD1:
notebook_id: GRAD1
notebook_dir: LinearReg
notebook_src: 02-Gradient-descent.ipynb
notebook_tag: default
Nb_POLR1:
notebook_id: POLR1
notebook_dir: LinearReg
notebook_src: 03-Polynomial-Regression.ipynb
notebook_tag: default
Nb_LOGR1:
notebook_id: LOGR1
notebook_dir: LinearReg
notebook_src: 04-Logistic-Regression.ipynb
notebook_tag: default
Nb_PER57:
notebook_id: PER57
notebook_dir: IRIS
notebook_src: 01-Simple-Perceptron.ipynb
notebook_tag: default
#
# ------ BHPD ------------------------------------------------------
#
Nb_BHPD1:
notebook_id: BHPD1
notebook_dir: BHPD
notebook_src: 01-DNN-Regression.ipynb
notebook_tag: default
Nb_BHPD2:
notebook_id: BHPD2
notebook_dir: BHPD
notebook_src: 02-DNN-Regression-Premium.ipynb
notebook_tag: default
#
# ------ MNIST -----------------------------------------------------
#
Nb_MNIST1:
notebook_id: MNIST1
notebook_dir: MNIST
notebook_src: 01-DNN-MNIST.ipynb
notebook_tag: default
Nb_MNIST2:
notebook_id: MNIST2
notebook_dir: MNIST
notebook_src: 02-CNN-MNIST.ipynb
notebook_tag: default
#
# ------ GTSRB -----------------------------------------------------
#
Nb_GTSRB1:
notebook_id: GTSRB1
notebook_dir: GTSRB
notebook_src: 01-Preparation-of-data.ipynb
notebook_tag: default
overrides:
scale: 0.05
output_dir: ./data
Nb_GTSRB2:
notebook_id: GTSRB2
notebook_dir: GTSRB
notebook_src: 02-First-convolutions.ipynb
notebook_tag: default
overrides:
run_dir: ./run/GTSRB2_done
enhanced_dir: '{datasets_dir}/GTSRB/enhanced'
dataset_name: set-24x24-L
batch_size: 64
epochs: 5
scale: 1
Nb_GTSRB3:
notebook_id: GTSRB3
notebook_dir: GTSRB
notebook_src: 03-Tracking-and-visualizing.ipynb
notebook_tag: default
overrides:
run_dir: ./run/GTSRB3_done
enhanced_dir: '{datasets_dir}/GTSRB/enhanced'
dataset_name: set-24x24-L
batch_size: 64
epochs: 5
scale: 1
Nb_GTSRB4:
notebook_id: GTSRB4
notebook_dir: GTSRB
notebook_src: 04-Data-augmentation.ipynb
notebook_tag: default
overrides:
run_dir: ./run/GTSRB4_done
enhanced_dir: '{datasets_dir}/GTSRB/enhanced'
dataset_name: set-24x24-L
batch_size: 64
epochs: 5
scale: 1
Nb_GTSRB5_r1:
notebook_id: GTSRB5
notebook_dir: GTSRB
notebook_src: 05-Full-convolutions.ipynb
notebook_tag: =1==done==
overrides:
run_dir: ./run/GTSRB5_done
enhanced_dir: '{datasets_dir}/GTSRB/enhanced'
datasets: "['set-24x24-L', 'set-24x24-RGB', 'set-48x48-L', 'set-48x48-RGB', 'set-24x24-L-LHE', 'set-24x24-RGB-HE', 'set-48x48-L-LHE', 'set-48x48-RGB-HE']"
models: "{'v1':'get_model_v1', 'v2':'get_model_v2', 'v3':'get_model_v3'}"
batch_size: 64
epochs: 16
scale: 1
with_datagen: False
verbose: 0
Nb_GTSRB5_r2:
notebook_id: GTSRB5
notebook_dir: GTSRB
notebook_src: 05-Full-convolutions.ipynb
notebook_tag: =2==done==
overrides:
run_dir: ./run/GTSRB5_done
enhanced_dir: '{datasets_dir}/GTSRB/enhanced'
datasets: "['set-24x24-L', 'set-24x24-RGB', 'set-48x48-L', 'set-48x48-RGB', 'set-24x24-L-LHE', 'set-24x24-RGB-HE', 'set-48x48-L-LHE', 'set-48x48-RGB-HE']"
models: "{'v1':'get_model_v1', 'v2':'get_model_v2', 'v3':'get_model_v3'}"
batch_size: 64
epochs: 16
scale: 1
with_datagen: False
verbose: 0
Nb_GTSRB5_r3:
notebook_id: GTSRB5
notebook_dir: GTSRB
notebook_src: 05-Full-convolutions.ipynb
notebook_tag: =3==done==
overrides:
run_dir: ./run/GTSRB5_done
enhanced_dir: '{datasets_dir}/GTSRB/enhanced'
datasets: "['set-48x48-L', 'set-48x48-RGB']"
models: "{'v2':'get_model_v2', 'v3':'get_model_v3'}"
batch_size: 64
epochs: 16
scale: 1
with_datagen: True
verbose: 0
Nb_GTSRB6:
notebook_id: GTSRB6
notebook_dir: GTSRB
notebook_src: 06-Notebook-as-a-batch.ipynb
notebook_tag: default
Nb_GTSRB7:
notebook_id: GTSRB7
notebook_dir: GTSRB
notebook_src: 07-Show-report.ipynb
notebook_tag: default
overrides:
run_dir: ./run/GTSRB7_done
report_dir: ./run/GTSRB5_done
#
# ------ IMDB ------------------------------------------------------
#
Nb_IMDB1:
notebook_id: IMDB1
notebook_dir: IMDB
notebook_src: 01-One-hot-encoding.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
hide_most_frequently: default
batch_size: default
epochs: default
Nb_IMDB2:
notebook_id: IMDB2
notebook_dir: IMDB
notebook_src: 02-Keras-embedding.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
hide_most_frequently: default
review_len: default
dense_vector_size: default
batch_size: default
epochs: default
output_dir: default
Nb_IMDB3:
notebook_id: IMDB3
notebook_dir: IMDB
notebook_src: 03-Prediction.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
review_len: default
dictionaries_dir: default
Nb_IMDB4:
notebook_id: IMDB4
notebook_dir: IMDB
notebook_src: 04-Show-vectors.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
review_len: default
dictionaries_dir: default
Nb_IMDB5:
notebook_id: IMDB5
notebook_dir: IMDB
notebook_src: 05-LSTM-Keras.ipynb
notebook_tag: default
#
# ------ SYNOP -----------------------------------------------------
#
Nb_LADYB1:
notebook_id: LADYB1
notebook_dir: SYNOP
notebook_src: LADYB1-Ladybug.ipynb
notebook_tag: default
overrides:
run_dir: default
scale: default
train_prop: default
sequence_len: default
predict_len: default
batch_size: default
epochs: default
Nb_SYNOP1:
notebook_id: SYNOP1
notebook_dir: SYNOP
notebook_src: SYNOP1-Preparation-of-data.ipynb
notebook_tag: default
overrides:
output_dir: default
Nb_SYNOP2:
notebook_id: SYNOP2
notebook_dir: SYNOP
notebook_src: SYNOP2-First-predictions.ipynb
notebook_tag: default
overrides:
scale: default
train_prop: default
sequence_len: default
batch_size: default
epochs: default
Nb_SYNOP3:
notebook_id: SYNOP3
notebook_dir: SYNOP
notebook_src: SYNOP3-12h-predictions.ipynb
notebook_tag: default
overrides:
iterations: default
scale: default
train_prop: default
sequence_len: default
batch_size: default
epochs: default
#
# ------ AE --------------------------------------------------------
#
Nb_AE1:
notebook_id: AE1
notebook_dir: AE
notebook_src: 01-AE-with-MNIST.ipynb
notebook_tag: default
Nb_AE2:
notebook_id: AE2
notebook_dir: AE
notebook_src: 02-AE-with-MNIST-post.ipynb
notebook_tag: default
#
# ------ VAE -------------------------------------------------------
#
Nb_VAE1:
notebook_id: VAE1
notebook_dir: VAE
notebook_src: 01-VAE-with-MNIST.ipynb
notebook_tag: default
overrides:
run_dir: ./run/VAE1_done
scale: 1
latent_dim: 2
r_loss_factor: 0.994
batch_size: 64
epochs: 10
Nb_VAE2:
notebook_id: VAE2
notebook_dir: VAE
notebook_src: 02-VAE-with-MNIST-post.ipynb
notebook_tag: default
overrides:
run_dir: ./run/VAE1_done
Nb_VAE5:
notebook_id: VAE5
notebook_dir: VAE
notebook_src: 05-About-CelebA.ipynb
notebook_tag: default
Nb_VAE6:
notebook_id: VAE6
notebook_dir: VAE
notebook_src: 06-Prepare-CelebA-datasets.ipynb
notebook_tag: default
overrides:
scale: 0.01
image_size: '(192,160)'
output_dir: ./data
exit_if_exist: False
Nb_VAE7:
notebook_id: VAE7
notebook_dir: VAE
notebook_src: 07-Check-CelebA.ipynb
notebook_tag: default
overrides:
image_size: '(192,160)'
enhanced_dir: '{datasets_dir}/celeba/enhanced'
Nb_VAE8:
notebook_id: VAE8
notebook_dir: VAE
notebook_src: 08-VAE-with-CelebA.ipynb
notebook_tag: default
overrides:
run_dir: ./run/VAE8_done
scale: 1
image_size: '(192,160)'
enhanced_dir: '{datasets_dir}/celeba/enhanced'
latent_dim: 300
r_loss_factor: 0.6
batch_size: 64
epochs: 15
Nb_VAE9:
notebook_id: VAE9
notebook_dir: VAE
notebook_src: 09-VAE-with-CelebA-post.ipynb
notebook_tag: default
overrides:
run_dir: ./run/VAE8_done
image_size: '(192,160)'
enhanced_dir: '{datasets_dir}/celeba/enhanced'
#
# ------ Misc ------------------------------------------------------
#
Nb_ACTF1:
notebook_id: ACTF1
notebook_dir: Misc
notebook_src: Activation-Functions.ipynb
notebook_tag: default
Nb_NP1:
notebook_id: NP1
notebook_dir: Misc
notebook_src: Numpy.ipynb
notebook_tag: default
Nb_TSB1:
notebook_id: TSB1
notebook_dir: Misc
notebook_src: Using-Tensorboard.ipynb
notebook_tag: default
_metadata_:
version: '1.0'
description: Full run on a small cpu
output_tag: ==ci==
output_ipynb: ./fidle/run/ci/ipynb
output_html: ./fidle/run/ci/html
report_json: ./fidle/run/ci/report.json
report_error: ./fidle/run/ci/error.txt
environment_vars:
FIDLE_SAVE_FIGS: true
TF_CPP_MIN_LOG_LEVEL: 2
#
# ------ LinearReg -------------------------------------------------
#
Nb_LINR1:
notebook_id: LINR1
notebook_dir: LinearReg
notebook_src: 01-Linear-Regression.ipynb
notebook_tag: default
Nb_GRAD1:
notebook_id: GRAD1
notebook_dir: LinearReg
notebook_src: 02-Gradient-descent.ipynb
notebook_tag: default
Nb_POLR1:
notebook_id: POLR1
notebook_dir: LinearReg
notebook_src: 03-Polynomial-Regression.ipynb
notebook_tag: default
Nb_LOGR1:
notebook_id: LOGR1
notebook_dir: LinearReg
notebook_src: 04-Logistic-Regression.ipynb
notebook_tag: default
Nb_PER57:
notebook_id: PER57
notebook_dir: IRIS
notebook_src: 01-Simple-Perceptron.ipynb
notebook_tag: default
#
# ------ BHPD ------------------------------------------------------
#
Nb_BHPD1:
notebook_id: BHPD1
notebook_dir: BHPD
notebook_src: 01-DNN-Regression.ipynb
notebook_tag: default
overrides:
fit_verbosity: 2
Nb_BHPD2:
notebook_id: BHPD2
notebook_dir: BHPD
notebook_src: 02-DNN-Regression-Premium.ipynb
notebook_tag: default
overrides:
fit_verbosity: 2
#
# ------ MNIST -----------------------------------------------------
#
Nb_MNIST1:
notebook_id: MNIST1
notebook_dir: MNIST
notebook_src: 01-DNN-MNIST.ipynb
notebook_tag: default
overrides:
fit_verbosity: 2
Nb_MNIST2:
notebook_id: MNIST2
notebook_dir: MNIST
notebook_src: 02-CNN-MNIST.ipynb
notebook_tag: default
overrides:
fit_verbosity: 2
#
# ------ GTSRB -----------------------------------------------------
#
Nb_GTSRB1:
notebook_id: GTSRB1
notebook_dir: GTSRB
notebook_src: 01-Preparation-of-data.ipynb
notebook_tag: default
overrides:
scale: 0.01
output_dir: ./data
progress_verbosity: 2
Nb_GTSRB2:
notebook_id: GTSRB2
notebook_dir: GTSRB
notebook_src: 02-First-convolutions.ipynb
notebook_tag: default
overrides:
run_dir: ./run/GTSRB2_done
enhanced_dir: './data'
dataset_name: set-24x24-L
batch_size: 64
epochs: 5
scale: 1
fit_verbosity: 2
Nb_GTSRB3:
notebook_id: GTSRB3
notebook_dir: GTSRB
notebook_src: 03-Tracking-and-visualizing.ipynb
notebook_tag: default
overrides:
run_dir: ./run/GTSRB3_done
enhanced_dir: './data'
dataset_name: set-24x24-L
batch_size: 64
epochs: 5
scale: 1
fit_verbosity: 2
Nb_GTSRB4:
notebook_id: GTSRB4
notebook_dir: GTSRB
notebook_src: 04-Data-augmentation.ipynb
notebook_tag: default
overrides:
run_dir: ./run/GTSRB4_done
enhanced_dir: './data'
dataset_name: set-24x24-L
batch_size: 64
epochs: 5
scale: 1
fit_verbosity: 2
Nb_GTSRB5_r1:
notebook_id: GTSRB5
notebook_dir: GTSRB
notebook_src: 05-Full-convolutions.ipynb
notebook_tag: =1==ci==
overrides:
run_dir: ./run/GTSRB5_done
enhanced_dir: './data'
datasets: "['set-24x24-L', 'set-24x24-RGB']"
models: "{'v1':'get_model_v1', 'v2':'get_model_v2'}"
batch_size: 64
epochs: 5
scale: 1
with_datagen: False
fit_verbosity: 0
Nb_GTSRB6:
notebook_id: GTSRB6
notebook_dir: GTSRB
notebook_src: 06-Notebook-as-a-batch.ipynb
notebook_tag: default
Nb_GTSRB7:
notebook_id: GTSRB7
notebook_dir: GTSRB
notebook_src: 07-Show-report.ipynb
notebook_tag: default
overrides:
run_dir: ./run/GTSRB7_done
report_dir: ./run/GTSRB5_done
#
# ------ IMDB ------------------------------------------------------
#
Nb_IMDB1:
notebook_id: IMDB1
notebook_dir: IMDB
notebook_src: 01-One-hot-encoding.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
hide_most_frequently: default
batch_size: default
epochs: default
fit_verbosity: 2
Nb_IMDB2:
notebook_id: IMDB2
notebook_dir: IMDB
notebook_src: 02-Keras-embedding.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
hide_most_frequently: default
review_len: default
dense_vector_size: default
batch_size: default
epochs: default
output_dir: default
Nb_IMDB3:
notebook_id: IMDB3
notebook_dir: IMDB
notebook_src: 03-Prediction.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
review_len: default
dictionaries_dir: default
Nb_IMDB4:
notebook_id: IMDB4
notebook_dir: IMDB
notebook_src: 04-Show-vectors.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
review_len: default
dictionaries_dir: default
Nb_IMDB5:
notebook_id: IMDB5
notebook_dir: IMDB
notebook_src: 05-LSTM-Keras.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
hide_most_frequently: default
review_len: default
dense_vector_size: default
batch_size: default
epochs: default
fit_verbosity: 2
scale: .1
#
# ------ SYNOP -----------------------------------------------------
#
Nb_LADYB1:
notebook_id: LADYB1
notebook_dir: SYNOP
notebook_src: LADYB1-Ladybug.ipynb
notebook_tag: default
overrides:
run_dir: default
scale: 0.1
train_prop: default
sequence_len: default
predict_len: default
batch_size: default
epochs: default
fit_verbosity: 2
Nb_SYNOP1:
notebook_id: SYNOP1
notebook_dir: SYNOP
notebook_src: SYNOP1-Preparation-of-data.ipynb
notebook_tag: default
overrides:
output_dir: default
Nb_SYNOP2:
notebook_id: SYNOP2
notebook_dir: SYNOP
notebook_src: SYNOP2-First-predictions.ipynb
notebook_tag: default
overrides:
scale: 0.1
train_prop: default
sequence_len: default
batch_size: default
epochs: default
Nb_SYNOP3:
notebook_id: SYNOP3
notebook_dir: SYNOP
notebook_src: SYNOP3-12h-predictions.ipynb
notebook_tag: default
overrides:
iterations: default
scale: default
train_prop: default
sequence_len: default
#
# ------ AE --------------------------------------------------------
#
Nb_AE1:
notebook_id: AE1
notebook_dir: AE
notebook_src: 01-Prepare-MNIST-dataset.ipynb
notebook_tag: default
overrides:
run_dir: default
scale: 0.02
prepared_dataset: default
progress_verbosity: 2
Nb_AE2:
notebook_id: AE2
notebook_dir: AE
notebook_src: 02-AE-with-MNIST.ipynb
notebook_tag: default
overrides:
run_dir: default
prepared_dataset: default
dataset_seed: default
scale: default
latent_dim: default
train_prop: default
batch_size: default
epochs: default
Nb_AE3:
notebook_id: AE3
notebook_dir: AE
notebook_src: 03-AE-with-MNIST-post.ipynb
notebook_tag: default
overrides:
run_dir: default
prepared_dataset: default
dataset_seed: default
scale: default
train_prop: default
Nb_AE4:
notebook_id: AE4
notebook_dir: AE
notebook_src: 04-ExtAE-with-MNIST.ipynb
notebook_tag: default
overrides:
run_dir: default
prepared_dataset: default
dataset_seed: default
scale: default
latent_dim: default
train_prop: default
batch_size: default
epochs: default
Nb_AE5:
notebook_id: AE5
notebook_dir: AE
notebook_src: 05-ExtAE-with-MNIST.ipynb
notebook_tag: default
overrides:
run_dir: default
prepared_dataset: default
dataset_seed: default
scale: default
latent_dim: default
train_prop: default
batch_size: default
epochs: default
#
# ------ VAE -------------------------------------------------------
#
Nb_VAE1:
notebook_id: VAE1
notebook_dir: VAE
notebook_src: 01-VAE-with-MNIST.ipynb
notebook_tag: default
overrides:
run_dir: default
latent_dim: default
loss_weights: default
scale: 0.01
seed: default
batch_size: default
epochs: default
fit_verbosity: 2
Nb_VAE2:
notebook_id: VAE2
notebook_dir: VAE
notebook_src: 02-VAE-with-MNIST.ipynb
notebook_tag: default
overrides:
run_dir: ./run/VAE2.000
latent_dim: default
loss_weights: default
scale: 0.01
seed: default
batch_size: default
epochs: default
fit_verbosity: 2
Nb_VAE3:
notebook_id: VAE3
notebook_dir: VAE
notebook_src: 03-VAE-with-MNIST-post.ipynb
notebook_tag: default
overrides:
run_dir: ./run/VAE2.000
scale: default
seed: default
Nb_VAE5:
notebook_id: VAE5
notebook_dir: VAE
notebook_src: 05-About-CelebA.ipynb
notebook_tag: default
overrides:
run_dir: default
progress_verbosity: 2
Nb_VAE6:
notebook_id: VAE6
notebook_dir: VAE
notebook_src: 06-Prepare-CelebA-datasets.ipynb
notebook_tag: default
overrides:
run_dir: default
scale: 0.01
seed: default
cluster_size: default
image_size: default
output_dir: ./data
exit_if_exist: False
progress_verbosity: 2
Nb_VAE7:
notebook_id: VAE7
notebook_dir: VAE
notebook_src: 07-Check-CelebA.ipynb
notebook_tag: default
overrides:
run_dir: default
image_size: default
enhanced_dir: ./data
progress_verbosity: 2
Nb_VAE8:
notebook_id: VAE8
notebook_dir: VAE
notebook_src: 08-VAE-with-CelebA.ipynb
notebook_tag: default
overrides:
run_dir: default
scale: 0.1
image_size: default
enhanced_dir: ./data
latent_dim: default
loss_weights: default
batch_size: default
epochs: default
progress_verbosity: 2
Nb_VAE9:
notebook_id: VAE9
notebook_dir: VAE
notebook_src: 09-VAE-with-CelebA-post.ipynb
notebook_tag: default
overrides:
run_dir: default
image_size: default
enhanced_dir: ./data
# ------ DCGAN -----------------------------------------------------
#
Nb_SHEEP1:
notebook_id: SHEEP1
notebook_dir: DCGAN
notebook_src: 01-DCGAN-Draw-me-a-sheep.ipynb
notebook_tag: default
overrides:
scale: 0.005
run_dir: default
latent_dim: default
epochs: 5
batch_size: default
num_img: default
fit_verbosity: 2
#
# ------ Misc ------------------------------------------------------
#
Nb_ACTF1:
notebook_id: ACTF1
notebook_dir: Misc
notebook_src: Activation-Functions.ipynb
notebook_tag: default
Nb_NP1:
notebook_id: NP1
notebook_dir: Misc
notebook_src: Numpy.ipynb
notebook_tag: default
_metadata_:
version: '1.0'
description: Full run on a smart gpu
output_tag: ==done==
output_ipynb: ./fidle/run/done/ipynb
output_html: ./fidle/run/done/html
report_json: ./fidle/run/done/report.json
report_error: ./fidle/run/done/error.txt
environment_vars:
FIDLE_SAVE_FIGS: true
TF_CPP_MIN_LOG_LEVEL: 2
#
# ------ LinearReg -------------------------------------------------
#
Nb_LINR1:
notebook_id: LINR1
notebook_dir: LinearReg
notebook_src: 01-Linear-Regression.ipynb
notebook_tag: default
Nb_GRAD1:
notebook_id: GRAD1
notebook_dir: LinearReg
notebook_src: 02-Gradient-descent.ipynb
notebook_tag: default
Nb_POLR1:
notebook_id: POLR1
notebook_dir: LinearReg
notebook_src: 03-Polynomial-Regression.ipynb
notebook_tag: default
Nb_LOGR1:
notebook_id: LOGR1
notebook_dir: LinearReg
notebook_src: 04-Logistic-Regression.ipynb
notebook_tag: default
Nb_PER57:
notebook_id: PER57
notebook_dir: IRIS
notebook_src: 01-Simple-Perceptron.ipynb
notebook_tag: default
#
# ------ BHPD ------------------------------------------------------
#
Nb_BHPD1:
notebook_id: BHPD1
notebook_dir: BHPD
notebook_src: 01-DNN-Regression.ipynb
notebook_tag: default
overrides:
fit_verbosity: 2
Nb_BHPD2:
notebook_id: BHPD2
notebook_dir: BHPD
notebook_src: 02-DNN-Regression-Premium.ipynb
notebook_tag: default
overrides:
fit_verbosity: 2
#
# ------ MNIST -----------------------------------------------------
#
Nb_MNIST1:
notebook_id: MNIST1
notebook_dir: MNIST
notebook_src: 01-DNN-MNIST.ipynb
notebook_tag: default
overrides:
fit_verbosity: 2
Nb_MNIST2:
notebook_id: MNIST2
notebook_dir: MNIST
notebook_src: 02-CNN-MNIST.ipynb
notebook_tag: default
overrides:
fit_verbosity: 2
#
# ------ GTSRB -----------------------------------------------------
#
Nb_GTSRB1:
notebook_id: GTSRB1
notebook_dir: GTSRB
notebook_src: 01-Preparation-of-data.ipynb
notebook_tag: default
overrides:
scale: 0.01
output_dir: ./data
progress_verbosity: 2
Nb_GTSRB2:
notebook_id: GTSRB2
notebook_dir: GTSRB
notebook_src: 02-First-convolutions.ipynb
notebook_tag: default
overrides:
run_dir: default
enhanced_dir: '{datasets_dir}/GTSRB/enhanced'
dataset_name: set-24x24-L
batch_size: 64
epochs: 5
scale: 1
fit_verbosity: 2
Nb_GTSRB3:
notebook_id: GTSRB3
notebook_dir: GTSRB
notebook_src: 03-Tracking-and-visualizing.ipynb
notebook_tag: default
overrides:
run_dir: default
enhanced_dir: '{datasets_dir}/GTSRB/enhanced'
dataset_name: set-24x24-L
batch_size: 64
epochs: 5
scale: 1
fit_verbosity: 2
Nb_GTSRB4:
notebook_id: GTSRB4
notebook_dir: GTSRB
notebook_src: 04-Data-augmentation.ipynb
notebook_tag: default
overrides:
run_dir: default
enhanced_dir: '{datasets_dir}/GTSRB/enhanced'
dataset_name: set-24x24-L
batch_size: 64
epochs: 5
scale: 1
fit_verbosity: 2
Nb_GTSRB5_r1:
notebook_id: GTSRB5
notebook_dir: GTSRB
notebook_src: 05-Full-convolutions.ipynb
notebook_tag: =1==done==
overrides:
run_dir: default
enhanced_dir: '{datasets_dir}/GTSRB/enhanced'
datasets: "['set-24x24-L', 'set-24x24-RGB', 'set-48x48-L', 'set-48x48-RGB', 'set-24x24-L-LHE', 'set-24x24-RGB-HE', 'set-48x48-L-LHE', 'set-48x48-RGB-HE']"
models: "{'v1':'get_model_v1', 'v2':'get_model_v2', 'v3':'get_model_v3'}"
batch_size: 64
epochs: 16
scale: 1
with_datagen: False
fit_verbosity: 0
Nb_GTSRB5_r2:
notebook_id: GTSRB5
notebook_dir: GTSRB
notebook_src: 05-Full-convolutions.ipynb
notebook_tag: =2==done==
overrides:
run_dir: default
enhanced_dir: '{datasets_dir}/GTSRB/enhanced'
datasets: "['set-24x24-L', 'set-24x24-RGB', 'set-48x48-L', 'set-48x48-RGB', 'set-24x24-L-LHE', 'set-24x24-RGB-HE', 'set-48x48-L-LHE', 'set-48x48-RGB-HE']"
models: "{'v1':'get_model_v1', 'v2':'get_model_v2', 'v3':'get_model_v3'}"
batch_size: 64
epochs: 16
scale: 1
with_datagen: False
fit_verbosity: 0
Nb_GTSRB5_r3:
notebook_id: GTSRB5
notebook_dir: GTSRB
notebook_src: 05-Full-convolutions.ipynb
notebook_tag: =3==done==
overrides:
run_dir: default
enhanced_dir: '{datasets_dir}/GTSRB/enhanced'
datasets: "['set-48x48-L', 'set-48x48-RGB']"
models: "{'v2':'get_model_v2', 'v3':'get_model_v3'}"
batch_size: 64
epochs: 16
scale: 1
with_datagen: True
fit_verbosity: 0
Nb_GTSRB6:
notebook_id: GTSRB6
notebook_dir: GTSRB
notebook_src: 06-Notebook-as-a-batch.ipynb
notebook_tag: default
Nb_GTSRB7:
notebook_id: GTSRB7
notebook_dir: GTSRB
notebook_src: 07-Show-report.ipynb
notebook_tag: default
overrides:
run_dir: default
report_dir: ./run/GTSRB5
#
# ------ IMDB ------------------------------------------------------
#
Nb_IMDB1:
notebook_id: IMDB1
notebook_dir: IMDB
notebook_src: 01-One-hot-encoding.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
hide_most_frequently: default
batch_size: default
epochs: default
fit_verbosity: 2
Nb_IMDB2:
notebook_id: IMDB2
notebook_dir: IMDB
notebook_src: 02-Keras-embedding.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
hide_most_frequently: default
review_len: default
dense_vector_size: default
batch_size: default
epochs: default
output_dir: default
Nb_IMDB3:
notebook_id: IMDB3
notebook_dir: IMDB
notebook_src: 03-Prediction.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
review_len: default
dictionaries_dir: default
Nb_IMDB4:
notebook_id: IMDB4
notebook_dir: IMDB
notebook_src: 04-Show-vectors.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
review_len: default
dictionaries_dir: default
Nb_IMDB5:
notebook_id: IMDB5
notebook_dir: IMDB
notebook_src: 05-LSTM-Keras.ipynb
notebook_tag: default
overrides:
run_dir: default
vocab_size: default
hide_most_frequently: default
review_len: default
dense_vector_size: default
batch_size: default
epochs: default
fit_verbosity: 2
scale: .5
#
# ------ SYNOP -----------------------------------------------------
#
Nb_LADYB1:
notebook_id: LADYB1
notebook_dir: SYNOP
notebook_src: LADYB1-Ladybug.ipynb
notebook_tag: default
overrides:
run_dir: default
scale: 1
train_prop: default
sequence_len: default
predict_len: default
batch_size: default
epochs: default
fit_verbosity: 2
Nb_SYNOP1:
notebook_id: SYNOP1
notebook_dir: SYNOP
notebook_src: SYNOP1-Preparation-of-data.ipynb
notebook_tag: default
overrides:
output_dir: default
Nb_SYNOP2:
notebook_id: SYNOP2
notebook_dir: SYNOP
notebook_src: SYNOP2-First-predictions.ipynb
notebook_tag: default
overrides:
scale: 1
train_prop: default
sequence_len: default
batch_size: default
epochs: default
Nb_SYNOP3:
notebook_id: SYNOP3
notebook_dir: SYNOP
notebook_src: SYNOP3-12h-predictions.ipynb
notebook_tag: default
overrides:
iterations: default
scale: default
train_prop: default
sequence_len: default
#
# ------ AE --------------------------------------------------------
#
Nb_AE1:
notebook_id: AE1
notebook_dir: AE
notebook_src: 01-Prepare-MNIST-dataset.ipynb
notebook_tag: default
overrides:
run_dir: default
scale: 1
prepared_dataset: default
progress_verbosity: 2
Nb_AE2:
notebook_id: AE2
notebook_dir: AE
notebook_src: 02-AE-with-MNIST.ipynb
notebook_tag: default
overrides:
run_dir: default
prepared_dataset: default
dataset_seed: default
scale: 1
latent_dim: default
train_prop: default
batch_size: default
epochs: default
Nb_AE3:
notebook_id: AE3
notebook_dir: AE
notebook_src: 03-AE-with-MNIST-post.ipynb
notebook_tag: default
overrides:
run_dir: ./run/AE2
prepared_dataset: default
dataset_seed: default
scale: 1
train_prop: default
Nb_AE4:
notebook_id: AE4
notebook_dir: AE
notebook_src: 04-ExtAE-with-MNIST.ipynb
notebook_tag: default
overrides:
run_dir: default
prepared_dataset: default
dataset_seed: default
scale: 1
latent_dim: default
train_prop: default
batch_size: default
epochs: default
Nb_AE5:
notebook_id: AE5
notebook_dir: AE
notebook_src: 05-ExtAE-with-MNIST.ipynb
notebook_tag: default
overrides:
run_dir: default
prepared_dataset: default
dataset_seed: default
scale: 1
latent_dim: default
train_prop: default
batch_size: default
epochs: default
#
# ------ VAE -------------------------------------------------------
#
Nb_VAE1:
notebook_id: VAE1
notebook_dir: VAE
notebook_src: 01-VAE-with-MNIST.ipynb
notebook_tag: default
overrides:
run_dir: default
latent_dim: 2
loss_weights: default
scale: 1
seed: default
batch_size: default
epochs: default
fit_verbosity: 2
Nb_VAE2:
notebook_id: VAE2
notebook_dir: VAE
notebook_src: 02-VAE-with-MNIST.ipynb
notebook_tag: default
overrides:
run_dir: default
latent_dim: 2
loss_weights: default
scale: 1
seed: default
batch_size: default
epochs: default
fit_verbosity: 2
Nb_VAE3:
notebook_id: VAE3
notebook_dir: VAE
notebook_src: 03-VAE-with-MNIST-post.ipynb
notebook_tag: default
overrides:
run_dir: default
scale: 1
seed: default
Nb_VAE5:
notebook_id: VAE5
notebook_dir: VAE
notebook_src: 05-About-CelebA.ipynb
notebook_tag: default
overrides:
run_dir: default
progress_verbosity: 2
Nb_VAE6:
notebook_id: VAE6
notebook_dir: VAE
notebook_src: 06-Prepare-CelebA-datasets.ipynb
notebook_tag: default
overrides:
run_dir: default
scale: 0.05
seed: default
cluster_size: default
image_size: default
output_dir: ./data
exit_if_exist: False
progress_verbosity: 2
Nb_VAE7:
notebook_id: VAE7
notebook_dir: VAE
notebook_src: 07-Check-CelebA.ipynb
notebook_tag: default
overrides:
run_dir: default
image_size: default
enhanced_dir: ./data
progress_verbosity: 2
Nb_VAE8:
notebook_id: VAE8
notebook_dir: VAE
notebook_src: 08-VAE-with-CelebA.ipynb
notebook_tag: default
overrides:
run_dir: default
scale: 1
image_size: '(192,160)'
enhanced_dir: '{datasets_dir}/celeba/enhanced'
latent_dim: 300
loss_weights: default
batch_size: 64
epochs: 15
progress_verbosity: 2
Nb_VAE9_r1:
notebook_id: VAE9
notebook_dir: VAE
notebook_src: 09-VAE-with-CelebA-192x160.ipynb
notebook_tag: =1==done==
overrides:
run_dir: ./run/VAE9_r1
scale: 1
image_size: '(192,160)'
enhanced_dir: '{datasets_dir}/celeba/enhanced'
latent_dim: 100
loss_weights: '[.7,.3]'
batch_size: 64
epochs: 5
progress_verbosity: 2
Nb_VAE9_r2:
notebook_id: VAE9
notebook_dir: VAE
notebook_src: 09-VAE-with-CelebA-192x160.ipynb
notebook_tag: =2==done==
overrides:
run_dir: ./run/VAE9_r2
scale: 1
image_size: '(192,160)'
enhanced_dir: '{datasets_dir}/celeba/enhanced'
latent_dim: 100
loss_weights: '[.5,.5]'
batch_size: 64
epochs: 5
progress_verbosity: 2
Nb_VAE9_r3:
notebook_id: VAE9
notebook_dir: VAE
notebook_src: 09-VAE-with-CelebA-192x160.ipynb
notebook_tag: =3==done==
overrides:
run_dir: ./run/VAE9_r3
scale: 1
image_size: '(192,160)'
enhanced_dir: '{datasets_dir}/celeba/enhanced'
latent_dim: 100
loss_weights: '[.3,.7]'
batch_size: 64
epochs: 5
progress_verbosity: 2
Nb_VAE10:
notebook_id: VAE10
notebook_dir: VAE
notebook_src: 10-VAE-with-CelebA-post.ipynb
notebook_tag: default
overrides:
run_dir: default
image_size: '(192,160)'
enhanced_dir: '{datasets_dir}/celeba/enhanced'
# ------ DCGAN -----------------------------------------------------
#
Nb_SHEEP1:
notebook_id: SHEEP1
notebook_dir: DCGAN
notebook_src: 01-DCGAN-Draw-me-a-sheep.ipynb
notebook_tag: default
overrides:
scale: 1
run_dir: ./run/SHEEP1
latent_dim: default
epochs: 10
batch_size: 32
num_img: 12
fit_verbosity: 2
Nb_SHEEP2:
notebook_id: SHEEP2
notebook_dir: DCGAN
notebook_src: 02-WGANGP-Draw-me-a-sheep.ipynb
notebook_tag: default
overrides:
scale: 1
run_dir: ./run/SHEEP2
latent_dim: 80
epochs: 3
batch_size: 64
num_img: 12
fit_verbosity: 2
#
# ------ Misc ------------------------------------------------------
#
Nb_ACTF1:
notebook_id: ACTF1
notebook_dir: Misc
notebook_src: Activation-Functions.ipynb
notebook_tag: default
Nb_NP1:
notebook_id: NP1
notebook_dir: Misc
notebook_src: Numpy.ipynb
notebook_tag: default
campain:
version: '1.0'
description: Automatically generated ci profile (16/10/22 21:33:51)
directory: ./campains/default
existing_notebook: 'remove # remove|skip'
report_template: 'fidle # fidle|default'
BHPD2:
notebook: BHPD/02-DNN-Regression-Premium.ipynb
overrides:
fit_verbosity: default
WINE1:
notebook: BHPD/03-DNN-Wine-Regression.ipynb
overrides:
fit_verbosity: default
dataset_name: default
#
# ------------ MNIST
TSB1:
notebook: Misc/Using-Tensorboard.ipynb
overrides: ??
PANDAS1:
notebook: Misc/Using-pandas.ipynb
#
LINR1:
notebook: LinearReg/01-Linear-Regression.ipynb
GRAD1:
notebook: LinearReg/02-Gradient-descent.ipynb
POLR1:
notebook: LinearReg/03-Polynomial-Regression.ipynb
LOGR1:
notebook: LinearReg/04-Logistic-Regression.ipynb
BHPD1:
notebook: BHPD/01-DNN-Regression.ipynb
overrides:
fit_verbosity: 2
BHPD2:
notebook: BHPD/02-DNN-Regression-Premium.ipynb
overrides:
fit_verbosity: 2
WINE1:
notebook: BHPD/03-DNN-Wine-Regression.ipynb
overrides:
fit_verbosity: 2
dataset_name: default
#
# ------------ MNIST
#
#
LINR1:
notebook: LinearReg/01-Linear-Regression.ipynb
GRAD1:
notebook: LinearReg/02-Gradient-descent.ipynb
POLR1:
notebook: LinearReg/03-Polynomial-Regression.ipynb
LOGR1:
notebook: LinearReg/04-Logistic-Regression.ipynb
BHPD1:
notebook: BHPD/01-DNN-Regression.ipynb
overrides:
fit_verbosity: 2
BHPD2:
notebook: BHPD/02-DNN-Regression-Premium.ipynb
overrides:
fit_verbosity: 2
WINE1.1:
notebook: BHPD/03-DNN-Wine-Regression.ipynb
overrides:
fit_verbosity: 2
dataset_name: winequality-red.csv
WINE1.2:
notebook: BHPD/03-DNN-Wine-Regression.ipynb
overrides:
fit_verbosity: 2
dataset_name: winequality-red.csv
#
# ------------ MNIST
#