%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/header.svg"></img>
# <!-- TITLE --> [K3VAE1] - First VAE, using functional API (MNIST dataset)
<!-- DESC --> Construction and training of a VAE, using the functional API, with a latent space of small dimension.
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->
## Objectives :
- Understanding and implementing a **variational autoencoder** neural network (VAE)
- Understanding the **Keras functional API**, using two custom layers
Since the computation required is significant, it is preferable to start with a very simple dataset such as MNIST.
...and with MNIST at a small scale if you don't have a GPU ;-)
## What we're going to do :
- Defining a VAE model
- Build the model
- Train it
- Have a look at the training process
## Acknowledgements :
Thanks to **François Chollet**, whose work this example is based on (and who is the creator of Keras !).
See : https://keras.io/examples/generative/vae
%% Cell type:markdown id: tags:
## Step 1 - Init python stuff
%% Cell type:code id: tags:
``` python
import os
os.environ['KERAS_BACKEND'] = 'torch'
import keras
from keras import layers
import numpy as np
from modules.layers import SamplingLayer, VariationalLossLayer
from modules.callbacks import ImagesCallback
from modules.datagen import MNIST
import sys
import fidle
# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('K3VAE1')
```
%% Cell type:markdown id: tags:
## Step 2 - Parameters
`scale` : With scale=1, we need 1'30s on a GPU V100 ...and >20' on a CPU !\
`latent_dim` : 2 dimensions is small, but useful to draw !\
`fit_verbosity`: Verbosity of the training progress bar: 0=silent, 1=progress bar, 2=one line per epoch\
`loss_weights` : Our **loss function** is the weighted sum of two losses:
- `r_loss` which measures the loss during reconstruction.
- `kl_loss` which measures the dispersion.
The weights are defined by: `loss_weights=[k1,k2]` where : `total_loss = k1*r_loss + k2*kl_loss`
In practice, a value of \[1,.06\] gives good results here.
With scale=0.2, epochs=10 : 3'30 on a laptop
%% Cell type:code id: tags:
``` python
latent_dim = 2
loss_weights = [1,.06]
scale = 0.2
seed = 123
batch_size = 64
epochs = 10
fit_verbosity = 1
```
%% Cell type:markdown id: tags:
Override parameters (batch mode) - Just forget this cell
%% Cell type:code id: tags:
``` python
fidle.override('latent_dim', 'loss_weights', 'scale', 'seed', 'batch_size', 'epochs', 'fit_verbosity')
```
%% Cell type:markdown id: tags:
## Step 3 - Prepare data
`MNIST.get_data()` returns : `x_train,y_train, x_test,y_test`, \
but we only need `x_train` for our training.
%% Cell type:code id: tags:
``` python
x_data, y_data, _,_ = MNIST.get_data(seed=seed, scale=scale, train_prop=1 )
fidle.scrawler.images(x_data[:20], None, indices='all', columns=10, x_size=1,y_size=1,y_padding=0, save_as='01-original')
```
%% Cell type:markdown id: tags:
## Step 4 - Build model
In this example, we will use the **functional API.**
For this, we will use two custom layers :
- `SamplingLayer`, which generates a vector z from the parameters z_mean and z_log_var - See : [SamplingLayer.py](./modules/layers/SamplingLayer.py)
- `VariationalLossLayer`, which computes the loss (reconstruction + KL) - See : [VariationalLossLayer.py](./modules/layers/VariationalLossLayer.py)
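Before building the encoder, here is a minimal, backend-agnostic sketch of what such a sampling layer does (the reparameterization trick). It is only an illustration; the notebook's own `SamplingLayer`, listed further down this page, does the same thing with torch tensors.

``` python
import keras
from keras import ops

class SamplingSketch(keras.layers.Layer):
    '''Illustrative sketch only: draw z ~ N(z_mean, exp(z_log_var)) via the reparameterization trick.'''
    def call(self, inputs):
        z_mean, z_log_var = inputs
        epsilon = keras.random.normal(shape=ops.shape(z_mean))   # eps ~ N(0,1)
        return z_mean + ops.exp(0.5 * z_log_var) * epsilon       # z = mu + sigma * eps
```

This is why the encoder outputs `z_mean` and `z_log_var` rather than `z` directly: sampling through a deterministic function of these two tensors keeps the operation differentiable.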
%% Cell type:markdown id: tags:
#### Encoder
%% Cell type:code id: tags:
``` python
inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, strides=1, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, strides=1, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
x = layers.Dense(16, activation="relu")(x)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = SamplingLayer()([z_mean, z_log_var])
encoder = keras.Model(inputs, [z_mean, z_log_var, z], name="encoder")
# encoder.summary()
```
%% Cell type:markdown id: tags:
#### Decoder
%% Cell type:code id: tags:
``` python
inputs = keras.Input(shape=(latent_dim,))
x = layers.Dense(7 * 7 * 64, activation="relu")(inputs)
x = layers.Reshape((7, 7, 64))(x)
x = layers.Conv2DTranspose(64, 3, strides=1, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
outputs = layers.Conv2DTranspose(1, 3, padding="same", activation="sigmoid")(x)
decoder = keras.Model(inputs, outputs, name="decoder")
# decoder.summary()
```
%% Cell type:markdown id: tags:
#### VAE
We will calculate the loss with a specific layer: `VariationalLossLayer`
See our : modules.layers.[VariationalLossLayer.py](./modules/layers/VariationalLossLayer.py)
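For reference, here is a minimal sketch of the two terms this layer combines; the real `VariationalLossLayer`, listed further down this page, computes exactly these quantities with torch.

``` python
import torch

def vae_loss(x, y, z_mean, z_log_var, k1=1.0, k2=0.06):
    '''Sketch of the weighted VAE loss: total = k1*r_loss + k2*kl_loss.'''
    # Reconstruction term : pixel-wise binary cross-entropy between input x and reconstruction y
    r_loss  = torch.nn.functional.binary_cross_entropy(y, x, reduction='sum')
    # Dispersion term : KL divergence between N(z_mean, exp(z_log_var)) and N(0,1)
    kl_loss = -torch.sum(1 + z_log_var - z_mean.pow(2) - z_log_var.exp())
    return k1 * r_loss + k2 * kl_loss
```

With `loss_weights=[1,.06]`, the reconstruction term dominates and the KL term acts as a regularizer of the latent distribution.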
%% Cell type:code id: tags:
``` python
inputs = keras.Input(shape=(28, 28, 1))
z_mean, z_log_var, z = encoder(inputs)
outputs = decoder(z)
outputs = VariationalLossLayer(loss_weights=loss_weights)([inputs, z_mean, z_log_var, outputs])
vae=keras.Model(inputs,outputs)
vae.compile(optimizer='adam', loss=None)
```
%% Cell type:markdown id: tags:
## Step 5 - Train
### 5.1 - Using two nice custom callbacks :-)
Two custom callbacks are used:
- `ImagesCallback` : saves some images during training - See [ImagesCallback.py](./modules/callbacks/ImagesCallback.py)
- `BestModelCallback` : saves the best model - See [BestModelCallback.py](./modules/callbacks/BestModelCallback.py)
%% Cell type:code id: tags:
``` python
callback_images = ImagesCallback(x=x_data, z_dim=latent_dim, nb_images=5, from_z=True, from_random=True, run_dir=run_dir)
callbacks_list = [callback_images]
```
%% Cell type:markdown id: tags:
### 5.2 - Let's train !
With `scale=1`, this takes about 1'15 on a GPU (V100 at IDRIS) ...or 20' on a CPU
%% Cell type:code id: tags:
``` python
chrono=fidle.Chrono()
chrono.start()
history = vae.fit(x_data, epochs=epochs, batch_size=batch_size, callbacks=callbacks_list, verbose=fit_verbosity)
chrono.show()
```
%% Cell type:markdown id: tags:
## Step 6 - Training review
### 6.1 - History
%% Cell type:code id: tags:
``` python
fidle.scrawler.history(history, plot={"Loss":['loss']}, save_as='history')
```
%% Cell type:markdown id: tags:
### 6.2 - Reconstruction during training
At the end of each epoch, our callback saved some reconstructed images.
Where :
Original image -> encoder -> z -> decoder -> Reconstructed image
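The same path can be replayed by hand once the model is trained; here is a small sketch (the encoder/decoder sub-models are retrieved from the trained `vae`, as in the Annexe below):

``` python
# Sketch : replay "original -> encoder -> z -> decoder -> reconstruction" manually
encoder = vae.get_layer('encoder')
decoder = vae.get_layer('decoder')
z_mean, z_log_var, z = encoder.predict(x_data[:5], verbose=0)
x_rebuilt = decoder.predict(z, verbose=0)
```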
%% Cell type:code id: tags:
``` python
images_z, images_r = callback_images.get_images( range(0,epochs,2) )
fidle.utils.subtitle('Original images :')
fidle.scrawler.images(x_data[:5], None, indices='all', columns=5, x_size=2,y_size=2, save_as=None)
fidle.utils.subtitle('Encoded/decoded images')
fidle.scrawler.images(images_z, None, indices='all', columns=5, x_size=2,y_size=2, save_as='02-reconstruct')
fidle.utils.subtitle('Original images :')
fidle.scrawler.images(x_data[:5], None, indices='all', columns=5, x_size=2,y_size=2, save_as=None)
```
%% Cell type:markdown id: tags:
### 6.3 - Generation (latent -> decoder)
%% Cell type:code id: tags:
``` python
fidle.utils.subtitle('Generated images from latent space')
fidle.scrawler.images(images_r, None, indices='all', columns=5, x_size=2,y_size=2, save_as='03-generated')
```
%% Cell type:markdown id: tags:
## Annexe - Model Save and reload
Save our model
%% Cell type:code id: tags:
``` python
os.makedirs(f'{run_dir}/models', exist_ok=True)
filename = run_dir+'/models/my_model.keras'
vae.save(filename)
```
%% Cell type:markdown id: tags:
Reload it
%% Cell type:code id: tags:
``` python
vae_reloaded = keras.models.load_model( filename,
custom_objects={ 'SamplingLayer': SamplingLayer,
'VariationalLossLayer':VariationalLossLayer})
```
%% Cell type:markdown id: tags:
Play with our decoder !
%% Cell type:code id: tags:
``` python
decoder = vae.get_layer('decoder')
img = decoder( np.array([[-1,.1]]))
fidle.scrawler.images(img.detach().cpu().numpy(), x_size=2,y_size=2, save_as='04-example')
```
%% Cell type:code id: tags:
``` python
fidle.end()
```
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/logo-paysage.svg"></img>
%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/header.svg"></img>
# <!-- TITLE --> [K3VAE2] - VAE, using a custom model class (MNIST dataset)
<!-- DESC --> Construction and training of a VAE, using model subclass, with a latent space of small dimension.
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->
## Objectives :
- Understanding and implementing a **variational autoencoder** neural network (VAE)
- Understanding an even more **advanced programming model**, using a **custom model**
Since the computation required is significant, it is preferable to start with a very simple dataset such as MNIST.
...and with MNIST at a small scale if you don't have a GPU ;-)
## What we're going to do :
- Defining a VAE model
- Build the model
- Train it
- Have a look at the training process
## Acknowledgements :
Thanks to **François Chollet**, whose work this example is based on (and who is the creator of Keras !).
See : https://keras.io/examples/generative/vae
%% Cell type:markdown id: tags:
## Step 1 - Init python stuff
%% Cell type:code id: tags:
``` python
import os
os.environ['KERAS_BACKEND'] = 'torch'
import keras
from keras import layers
import numpy as np
from modules.models import VAE
from modules.layers import SamplingLayer
from modules.callbacks import ImagesCallback
from modules.datagen import MNIST
import matplotlib.pyplot as plt
import scipy.stats
import sys
import fidle
# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('K3VAE2')
VAE.about()
```
%% Cell type:markdown id: tags:
## Step 2 - Parameters
`scale` : with scale=1, we need 1'30s on a GPU V100 ...and >20' on a CPU !
`latent_dim` : 2 dimensions is small, but useful to draw !
`fit_verbosity`: Verbosity of the training progress bar: 0=silent, 1=progress bar, 2=one line per epoch
`loss_weights` : Our **loss function** is the weighted sum of two losses:
- `r_loss` which measures the loss during reconstruction.
- `kl_loss` which measures the dispersion.
The weights are defined by: `loss_weights=[k1,k2]` where : `total_loss = k1*r_loss + k2*kl_loss`
In practice, a value of \[1,.06\] gives good results here.
%% Cell type:code id: tags:
``` python
latent_dim = 6
loss_weights = [1,.06]
scale = .2
seed = 123
batch_size = 64
epochs = 4
fit_verbosity = 1
```
%% Cell type:markdown id: tags:
Override parameters (batch mode) - Just forget this cell
%% Cell type:code id: tags:
``` python
fidle.override('latent_dim', 'loss_weights', 'scale', 'seed', 'batch_size', 'epochs', 'fit_verbosity')
```
%% Cell type:markdown id: tags:
## Step 3 - Prepare data
`MNIST.get_data()` returns : `x_train,y_train, x_test,y_test`, \
but we only need `x_train` for our training.
%% Cell type:code id: tags:
``` python
x_data, y_data, _,_ = MNIST.get_data(seed=seed, scale=scale, train_prop=1 )
fidle.scrawler.images(x_data[:20], None, indices='all', columns=10, x_size=1,y_size=1,y_padding=0, save_as='01-original')
```
%% Cell type:markdown id: tags:
## Step 4 - Build model
In this example, we will use a **custom model**.
For this, we will use :
- `SamplingLayer`, which generates a vector z from the parameters z_mean and z_log_var - See : [SamplingLayer.py](./modules/layers/SamplingLayer.py)
- `VAE`, a custom model with a specific train_step - See : [VAE.py](./modules/models/VAE.py)
%% Cell type:markdown id: tags:
#### Encoder
%% Cell type:code id: tags:
``` python
inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, strides=1, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, strides=1, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
x = layers.Dense(16, activation="relu")(x)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = SamplingLayer()([z_mean, z_log_var])
encoder = keras.Model(inputs, [z_mean, z_log_var, z], name="encoder")
encoder.compile()
```
%% Cell type:markdown id: tags:
#### Decoder
%% Cell type:code id: tags:
``` python
inputs = keras.Input(shape=(latent_dim,))
x = layers.Dense(7 * 7 * 64, activation="relu")(inputs)
x = layers.Reshape((7, 7, 64))(x)
x = layers.Conv2DTranspose(64, 3, strides=1, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
outputs = layers.Conv2DTranspose(1, 3, padding="same", activation="sigmoid")(x)
decoder = keras.Model(inputs, outputs, name="decoder")
decoder.compile()
```
%% Cell type:markdown id: tags:
#### VAE
`VAE` is a custom model with a specific train_step - See : [VAE.py](./modules/models/VAE.py)
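As a preview, here is a condensed, illustrative sketch of what such a `train_step` looks like with the torch backend (names shortened; the complete implementation, with metric tracking, is in the VAE.py module listed further down this page):

``` python
import torch
import keras

class VAESketch(keras.Model):
    '''Condensed sketch of the custom VAE model and its torch-backend train_step.'''
    def __init__(self, encoder, decoder, loss_weights=[1, .06], **kwargs):
        super().__init__(**kwargs)
        self.encoder, self.decoder, self.loss_weights = encoder, decoder, loss_weights

    def train_step(self, data):
        if isinstance(data, tuple):
            data = data[0]
        k1, k2 = self.loss_weights
        self.zero_grad()                                  # reset gradients
        z_mean, z_log_var, z = self.encoder(data)         # forward pass : encode
        reconstruction = self.decoder(z)                  # forward pass : decode
        r_loss  = torch.nn.functional.binary_cross_entropy(reconstruction, data, reduction='sum')
        kl_loss = -torch.sum(1 + z_log_var - z_mean.pow(2) - z_log_var.exp())
        loss    = k1*r_loss + k2*kl_loss
        loss.backward()                                   # backward pass
        gradients = [w.value.grad for w in self.trainable_weights]
        with torch.no_grad():
            self.optimizer.apply(gradients, self.trainable_weights)   # update weights
        return {'loss': loss}
```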
%% Cell type:code id: tags:
``` python
vae = VAE(encoder, decoder, loss_weights)
vae.compile(optimizer='adam')
```
%% Cell type:markdown id: tags:
## Step 5 - Train
### 5.1 - Using two nice custom callbacks :-)
Two custom callbacks are used:
- `ImagesCallback` : saves some images during training - See [ImagesCallback.py](./modules/callbacks/ImagesCallback.py)
- `BestModelCallback` : saves the best model - See [BestModelCallback.py](./modules/callbacks/BestModelCallback.py)
%% Cell type:code id: tags:
``` python
callback_images = ImagesCallback(x=x_data, z_dim=latent_dim, nb_images=5, from_z=True, from_random=True, run_dir=run_dir)
callbacks_list = [callback_images]
```
%% Cell type:markdown id: tags:
### 5.2 - Let's train !
With `scale=1`, this takes about 1'15 on a GPU (V100 at IDRIS) ...or 20' on a CPU
%% Cell type:code id: tags:
``` python
chrono=fidle.Chrono()
chrono.start()
history = vae.fit(x_data, epochs=epochs, batch_size=batch_size, callbacks=callbacks_list, verbose=fit_verbosity)
chrono.show()
```
%% Cell type:markdown id: tags:
## Step 6 - Training review
### 6.1 - History
%% Cell type:code id: tags:
``` python
fidle.scrawler.history(history, plot={"Loss":['loss']}, save_as='history')
```
%% Cell type:markdown id: tags:
### 6.2 - Reconstruction during training
At the end of each epoch, our callback saved some reconstructed images.
Where :
Original image -> encoder -> z -> decoder -> Reconstructed image
%% Cell type:code id: tags:
``` python
images_z, images_r = callback_images.get_images( range(0,epochs,2) )
fidle.utils.subtitle('Original images :')
fidle.scrawler.images(x_data[:5], None, indices='all', columns=5, x_size=2,y_size=2, save_as='02-original')
fidle.utils.subtitle('Encoded/decoded images')
fidle.scrawler.images(images_z, None, indices='all', columns=5, x_size=2,y_size=2, save_as='03-reconstruct')
fidle.utils.subtitle('Original images :')
fidle.scrawler.images(x_data[:5], None, indices='all', columns=5, x_size=2,y_size=2, save_as=None)
```
%% Cell type:markdown id: tags:
### 6.3 - Generation (latent -> decoder) during training
%% Cell type:code id: tags:
``` python
fidle.utils.subtitle('Generated images from latent space')
fidle.scrawler.images(images_r, None, indices='all', columns=5, x_size=2,y_size=2, save_as='04-encoded')
```
%% Cell type:markdown id: tags:
### 6.4 - Save model
%% Cell type:code id: tags:
``` python
os.makedirs(f'{run_dir}/models', exist_ok=True)
vae.save(f'{run_dir}/models/vae_model.keras')
```
%% Cell type:markdown id: tags:
## Step 7 - Model evaluation
%% Cell type:markdown id: tags:
### 7.1 - Reload model
%% Cell type:code id: tags:
``` python
vae=VAE()
vae.reload(f'{run_dir}/models/vae_model.keras')
```
%% Cell type:markdown id: tags:
### 7.2 - Image reconstruction
%% Cell type:code id: tags:
``` python
# ---- Select few images
x_show = fidle.utils.pick_dataset(x_data, n=10)
# ---- Get latent points and reconstructed images
z_mean, z_var, z = vae.encoder.predict(x_show)
x_reconst = vae.decoder.predict(z)
# ---- Show it
labels=[ str(np.round(z[i],1)) for i in range(10) ]
fidle.scrawler.images(x_show, None, indices='all', columns=10, x_size=2,y_size=2, save_as='05-original')
fidle.scrawler.images(x_reconst, None, indices='all', columns=10, x_size=2,y_size=2, save_as='06-reconstruct')
```
%% Cell type:markdown id: tags:
### 7.3 - Visualization of the latent space
%% Cell type:code id: tags:
``` python
n_show = int(20000*scale)
# ---- Select images
x_show, y_show = fidle.utils.pick_dataset(x_data,y_data, n=n_show)
# ---- Get latent points
z_mean, z_var, z = vae.encoder.predict(x_show)
# ---- Show them
fig = plt.figure(figsize=(14, 10))
plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=30)
plt.colorbar()
fidle.scrawler.save_fig('07-Latent-space')
plt.show()
```
%% Cell type:markdown id: tags:
### 7.4 - Generative latent space
%% Cell type:code id: tags:
``` python
if latent_dim>2:
print('Sorry, This part can only work if the latent space is of dimension 2')
else:
grid_size = 18
grid_scale = 1
# ---- Draw a ppf grid
grid=[]
for y in scipy.stats.norm.ppf(np.linspace(0.99, 0.01, grid_size),scale=grid_scale):
for x in scipy.stats.norm.ppf(np.linspace(0.01, 0.99, grid_size),scale=grid_scale):
grid.append( (x,y) )
grid=np.array(grid)
# ---- Draw latent points and grid
fig = plt.figure(figsize=(10, 8))
plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=20)
plt.scatter(grid[:, 0] , grid[:, 1], c = 'black', s=60, linewidth=2, marker='+', alpha=1)
fidle.scrawler.save_fig('08-Latent-grid')
plt.show()
# ---- Plot grid corresponding images
x_reconst = vae.decoder.predict([grid])
fidle.scrawler.images(x_reconst, indices='all', columns=grid_size, x_size=0.5,y_size=0.5, y_padding=0,spines_alpha=0.1, save_as='09-Latent-morphing')
```
%% Cell type:code id: tags:
``` python
fidle.end()
```
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/logo-paysage.svg"></img>
%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/header.svg"></img>
# <!-- TITLE --> [K3VAE3] - Analysis of the VAE's latent space of MNIST dataset
<!-- DESC --> Visualization and analysis of the VAE's latent space of the dataset MNIST
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->
## Objectives :
- First data generation from **latent space**
- Understanding of underlying principles
- Model management
Here, we no longer consume data, we generate it ! ;-)
## What we're going to do :
- Load a saved model
- Reconstruct some images
- Latent space visualization
- Matrix of generated images
%% Cell type:markdown id: tags:
## Step 1 - Init python stuff
%% Cell type:markdown id: tags:
### 1.1 - Init python
%% Cell type:code id: tags:
``` python
import os
os.environ['KERAS_BACKEND'] = 'torch'
import keras
from keras import layers
import numpy as np
from modules.models import VAE
from modules.datagen import MNIST
import matplotlib
import matplotlib.pyplot as plt
import scipy.stats
from barviz import Simplex
from barviz import Collection
import sys
import fidle
# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('K3VAE3')
```
%% Cell type:markdown id: tags:
### 1.2 - Parameters
%% Cell type:code id: tags:
``` python
scale = 1
seed = 123
models_dir = './run/K3VAE2'
```
%% Cell type:markdown id: tags:
Override parameters (batch mode) - Just forget this cell
%% Cell type:code id: tags:
``` python
fidle.override('scale', 'seed', 'models_dir')
```
%% Cell type:markdown id: tags:
## Step 2 - Get data
%% Cell type:code id: tags:
``` python
x_data, y_data, _,_ = MNIST.get_data(seed=seed, scale=scale, train_prop=1 )
```
%% Cell type:markdown id: tags:
## Step 3 - Reload best model
%% Cell type:code id: tags:
``` python
vae=VAE()
vae.reload(f'{models_dir}/models/vae_model')
```
%% Cell type:markdown id: tags:
## Step 4 - Image reconstruction
%% Cell type:code id: tags:
``` python
# ---- Select few images
x_show = fidle.utils.pick_dataset(x_data, n=10)
# ---- Get latent points and reconstructed images
z_mean, z_var, z = vae.encoder.predict(x_show, verbose=0)
x_reconst = vae.decoder.predict(z, verbose=0)
latent_dim = z.shape[1]
# ---- Show it
labels=[ str(np.round(z[i],1)) for i in range(10) ]
fidle.utils.subtitle('Originals :')
fidle.scrawler.images(x_show, None, indices='all', columns=10, x_size=2,y_size=2, save_as='01-original')
fidle.utils.subtitle('Reconstructed :')
fidle.scrawler.images(x_reconst, None, indices='all', columns=10, x_size=2,y_size=2, save_as='02-reconstruct')
```
%% Cell type:markdown id: tags:
## Step 5 - Visualizing the latent space
%% Cell type:code id: tags:
``` python
n_show = min( 20000, len(x_data) )
# ---- Select images
x_show, y_show = fidle.utils.pick_dataset(x_data,y_data, n=n_show)
# ---- Get latent points
z_mean, z_var, z = vae.encoder.predict(x_show, verbose=0)
```
%% Cell type:markdown id: tags:
### 5.1 - Classic 2D visualisation
%% Cell type:code id: tags:
``` python
fig = plt.figure(figsize=(14, 10))
plt.scatter(z[:, 2] , z[:, 4], c=y_show, cmap= 'tab10', alpha=0.5, s=30)
plt.colorbar()
fidle.scrawler.save_fig('03-Latent-space')
plt.show()
```
%% Cell type:markdown id: tags:
### 5.2 - Simplex visualisation
%% Cell type:code id: tags:
``` python
if latent_dim<4:
print('Sorry, this part can only work if the latent space dimension is greater than 3')
else:
# ---- Softmax rescale
#
zs = np.exp(z)/np.sum(np.exp(z),axis=1,keepdims=True)
# zc = zs * 1/np.max(zs)
# ---- Create collection
#
c = Collection(zs, colors=y_show, labels=y_show)
c.attrs.markers_colormap = {'colorscale':'Rainbow','cmin':0,'cmax':latent_dim}
c.attrs.markers_size = 5
c.attrs.markers_border_width = 0
c.attrs.markers_opacity = 0.8
s = Simplex.build(latent_dim)
s.attrs.width = 1000
s.attrs.height = 1000
s.plot(c)
```
%% Cell type:markdown id: tags:
## Step 6 - Generate from latent space (latent_dim==2)
%% Cell type:code id: tags:
``` python
if latent_dim>2:
print('Sorry, This part can only work if the latent space is of dimension 2')
else:
grid_size = 14
grid_scale = 1.
# ---- Draw a ppf grid
grid=[]
for y in scipy.stats.norm.ppf(np.linspace(0.99, 0.01, grid_size),scale=grid_scale):
for x in scipy.stats.norm.ppf(np.linspace(0.01, 0.99, grid_size),scale=grid_scale):
grid.append( (x,y) )
grid=np.array(grid)
# ---- Draw latent points and grid
fig = plt.figure(figsize=(12, 10))
plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=20)
plt.scatter(grid[:, 0] , grid[:, 1], c = 'black', s=60, linewidth=2, marker='+', alpha=1)
fidle.scrawler.save_fig('04-Latent-grid')
plt.show()
# ---- Plot grid corresponding images
x_reconst = vae.decoder.predict([grid])
fidle.scrawler.images(x_reconst, indices='all', columns=grid_size, x_size=0.5,y_size=0.5, y_padding=0,spines_alpha=0.1, save_as='05-Latent-morphing')
```
%% Cell type:code id: tags:
``` python
fidle.end()
```
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/logo-paysage.svg"></img>
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___| ImageCallback
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/SARI/DEVLOG 2020 - S. Arias, E. Maldonado, JL. Parouty
# ------------------------------------------------------------------
# 2.0 version by JL Parouty, feb 2021
from keras.callbacks import Callback
import numpy as np
import matplotlib.pyplot as plt
from skimage import io
import os
class ImagesCallback(Callback):
'''
Save generated (random mode) or encoded/decoded (z mode) images on epoch end.
params:
x : input images, for z mode (None)
z_dim : size of the latent space, for random mode (None)
nb_images : number of images to save
from_z : save images from z (False)
from_random : save images from random (False)
filename : images filename
run_dir : output directory to save images
'''
def __init__(self, x = None,
z_dim = None,
nb_images = 5,
from_z = False,
from_random = False,
filename = 'image-{epoch:03d}-{i:02d}.jpg',
run_dir = './run'):
# ---- Parameters
#
self.x = None if x is None else x[:nb_images]
self.z_dim = z_dim
self.nb_images = nb_images
self.from_z = from_z
self.from_random = from_random
self.filename_z = run_dir + '/images-z/' + filename
self.filename_random = run_dir + '/images-random/' + filename
if from_z: os.makedirs( run_dir + '/images-z/', mode=0o750, exist_ok=True)
if from_random: os.makedirs( run_dir + '/images-random/', mode=0o750, exist_ok=True)
def save_images(self, images, filename, epoch):
'''Save images as <filename>'''
for i,image in enumerate(images):
image = image.squeeze() # Squeeze it if monochrome : (lx,ly,1) -> (lx,ly)
filenamei = filename.format(epoch=epoch,i=i)
if len(image.shape) == 2:
plt.imsave(filenamei, image, cmap='gray_r')
else:
plt.imsave(filenamei, image)
def on_epoch_end(self, epoch, logs={}):
'''Called at the end of each epoch'''
encoder = self.model.get_layer('encoder')
decoder = self.model.get_layer('decoder')
if self.from_random:
z = np.random.normal( size=(self.nb_images,self.z_dim) )
images = decoder.predict(z)
self.save_images(images, self.filename_random, epoch)
if self.from_z:
z_mean, z_var, z = encoder.predict(self.x)
images = decoder.predict(z)
self.save_images(images, self.filename_z, epoch)
def get_images(self, epochs=None, from_z=True,from_random=True):
'''Read and return saved images. epochs is a range'''
if epochs is None : return
images_z = []
images_r = []
for epoch in list(epochs):
for i in range(self.nb_images):
if from_z:
f = self.filename_z.format(epoch=epoch,i=i)
images_z.append( io.imread(f) )
if from_random:
f = self.filename_random.format(epoch=epoch,i=i)
images_r.append( io.imread(f) )
return images_z, images_r
from modules.callbacks.ImagesCallback import ImagesCallback
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___| MNIST Data loader
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/MIAI - https://fidle.cnrs.fr
# ------------------------------------------------------------------
# JL Parouty (Mars 2024)
import h5py
import os
import numpy as np
from hashlib import blake2b
import keras
import keras.datasets.mnist as mnist
# ------------------------------------------------------------------
# A useful class to manage our MNIST dataset
# This class allows managing datasets derived from the original MNIST
# ------------------------------------------------------------------
class MNIST():
version = '0.1'
def __init__(self):
pass
@classmethod
def get_data(cls, normalize=True, expand=True, scale=1., train_prop=0.8, shuffle=True, seed=None):
"""
Return original MNIST dataset
args:
normalize : Normalize dataset or not (True)
expand : Reshape images as (28,28,1) instead (28,28) (True)
scale : Scale of dataset to use. 1. means 100% (1.)
train_prop : Ratio of train/test (0.8)
shuffle : Shuffle data if True (True)
seed : Random seed value. False means no seed, None means use /dev/urandom (None)
returns:
x_train,y_train,x_test,y_test
"""
# ---- Seed
#
if seed is not False:
np.random.seed(seed)
print(f'Seeded ({seed})')
# ---- Get data
#
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print('Dataset loaded.')
# ---- Concatenate
#
x_data = np.concatenate([x_train, x_test], axis=0)
y_data = np.concatenate([y_train, y_test])
print('Concatenated.')
# ---- Shuffle
#
if shuffle:
p = np.random.permutation(len(x_data))
x_data, y_data = x_data[p], y_data[p]
print('Shuffled.')
# ---- Rescale
#
n = int(scale*len(x_data))
x_data, y_data = x_data[:n], y_data[:n]
print(f'Rescaled ({scale}).')
# ---- Normalization
#
if normalize:
x_data = x_data.astype('float32') / 255.
print('Normalized.')
# ---- Reshape : (28,28) -> (28,28,1)
#
if expand:
x_data = np.expand_dims(x_data, axis=-1)
print('Reshaped.')
# ---- Split
#
n=int(len(x_data)*train_prop)
x_train, x_test = x_data[:n], x_data[n:]
y_train, y_test = y_data[:n], y_data[n:]
print(f'Split ({train_prop}).')
# ---- Hash
#
h = blake2b(digest_size=10)
for a in [x_train,x_test, y_train,y_test]:
h.update(a)
# ---- About and return
#
print('x_train shape is : ', x_train.shape)
print('x_test shape is : ', x_test.shape)
print('y_train shape is : ', y_train.shape)
print('y_test shape is : ', y_test.shape)
print('Blake2b digest is : ', h.hexdigest())
return x_train,y_train, x_test,y_test
from modules.datagen.MNIST import MNIST
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___| SamplingLayer
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/MIAI - https://fidle.cnrs.fr
# ------------------------------------------------------------------
# JL Parouty (Mars 2024)
import keras
import torch
from torch.distributions.normal import Normal
# Note : https://keras.io/guides/making_new_layers_and_models_via_subclassing/
class SamplingLayer(keras.layers.Layer):
'''A custom layer that receives (z_mean, z_log_var) and samples a z vector'''
def call(self, inputs):
z_mean, z_log_var = inputs
batch_size, latent_dim = z_mean.shape
epsilon = Normal(0, 1).sample((batch_size, latent_dim)).to(z_mean.device)
z = z_mean + torch.exp(0.5 * z_log_var) * epsilon
return z
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___| VariationalLossLayer
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/MIAI - https://fidle.cnrs.fr
# ------------------------------------------------------------------
# JL Parouty (mars 2024)
import keras
import torch
# See : https://keras.io/guides/making_new_layers_and_models_via_subclassing/
class VariationalLossLayer(keras.layers.Layer):
def __init__(self, loss_weights=[3,7]):
super().__init__()
self.k1 = loss_weights[0]
self.k2 = loss_weights[1]
def call(self, inputs):
k1 = self.k1
k2 = self.k2
# ---- Retrieve inputs
#
x, z_mean, z_log_var, y = inputs
# ---- Compute : reconstruction loss
#
r_loss = torch.nn.functional.binary_cross_entropy(y, x, reduction='sum')
#
# ---- Compute : kl_loss
#
kl_loss = - torch.sum(1+ z_log_var - z_mean.pow(2) - z_log_var.exp())
# ---- Compute total loss, and add it
#
loss = r_loss*k1 + kl_loss*k2
self.add_loss(loss)
return y
def get_config(self):
return {'loss_weights':[self.k1,self.k2]}
from modules.layers.SamplingLayer import SamplingLayer
from modules.layers.VariationalLossLayer import VariationalLossLayer
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___| VAE Example
# ------------------------------------------------------------------
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/MIAI - https://fidle.cnrs.fr
# ------------------------------------------------------------------
# JL Parouty (March 2024)
import numpy as np
import keras
import torch
from IPython.display import display,Markdown
from modules.layers import SamplingLayer
import os
# Note : https://keras.io/guides/making_new_layers_and_models_via_subclassing/
class VAE(keras.Model):
'''
A VAE model, built from given encoder and decoder
'''
version = '2.0'
def __init__(self, encoder=None, decoder=None, loss_weights=[1,1], **kwargs):
'''
VAE instantiation with encoder, decoder and loss_weights
args :
encoder : Encoder model
decoder : Decoder model
loss_weights : Weights of the two loss terms: reconstruction_loss and kl_loss
return:
None
'''
super(VAE, self).__init__(**kwargs)
self.encoder = encoder
self.decoder = decoder
self.loss_weights = loss_weights
print(f'Fidle VAE is ready :-) loss_weights={list(self.loss_weights)}')
def call(self, inputs):
'''
Model forward pass, when we use our model
args:
inputs : Model inputs
return:
output : Output of the model
'''
z_mean, z_log_var, z = self.encoder(inputs)
output = self.decoder(z)
return output
def train_step(self, input):
'''
Implementation of the training update.
Receive an input, compute loss, get gradient, update weights and return metrics.
Here, our metrics are loss.
args:
inputs : Model inputs
return:
loss : Total loss
r_loss : Reconstruction loss
kl_loss : KL loss
'''
# ---- Get the input we need, specified in the .fit()
#
if isinstance(input, tuple):
input = input[0]
k1,k2 = self.loss_weights
# ---- Reset grad
#
self.zero_grad()
# ---- Forward pass
#
# Get encoder outputs
#
z_mean, z_log_var, z = self.encoder(input)
# ---- Get reconstruction from decoder
#
reconstruction = self.decoder(z)
# ---- Compute loss
# Total loss = Reconstruction loss + KL loss
#
r_loss = torch.nn.functional.binary_cross_entropy(reconstruction, input, reduction='sum')
kl_loss = - torch.sum(1+ z_log_var - z_mean.pow(2) - z_log_var.exp())
loss = r_loss*k1 + kl_loss*k2
# ---- Compute gradients for the weights
#
loss.backward()
# ---- Adjust learning weights
#
trainable_weights = [v for v in self.trainable_weights]
gradients = [v.value.grad for v in trainable_weights]
with torch.no_grad():
self.optimizer.apply(gradients, trainable_weights)
# ---- Update metrics (includes the metric that tracks the loss)
#
for metric in self.metrics:
if metric.name == "loss":
metric.update_state(loss)
else:
metric.update_state(input, reconstruction)
# ---- Return a dict mapping metric names to current value
# Note that it will include the loss (tracked in self.metrics).
#
return {m.name: m.result() for m in self.metrics}
# # ---- Forward pass
# # Run the forward pass and record
# # operations on the GradientTape.
# #
# with tf.GradientTape() as tape:
# # ---- Get encoder outputs
# #
# z_mean, z_log_var, z = self.encoder(input)
# # ---- Get reconstruction from decoder
# #
# reconstruction = self.decoder(z)
# # ---- Compute loss
# # Reconstruction loss, KL loss and Total loss
# #
# reconstruction_loss = k1 * tf.reduce_mean( keras.losses.binary_crossentropy(input, reconstruction) )
# kl_loss = 1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var)
# kl_loss = -tf.reduce_mean(kl_loss) * k2
# total_loss = reconstruction_loss + kl_loss
# # ---- Retrieve gradients from gradient_tape
# # and run one step of gradient descent
# # to optimize trainable weights
# #
# grads = tape.gradient(total_loss, self.trainable_weights)
# self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
# return {
# "loss": total_loss,
# "r_loss": reconstruction_loss,
# "kl_loss": kl_loss,
# }
def predict(self,inputs):
'''Our predict function...'''
z_mean, z_var, z = self.encoder.predict(inputs)
outputs = self.decoder.predict(z)
return outputs
def save(self,filename):
'''Save model in 2 part'''
filename, extension = os.path.splitext(filename)
self.encoder.save(f'{filename}-encoder.keras')
self.decoder.save(f'{filename}-decoder.keras')
def reload(self,filename):
'''Reload a 2 part saved model.'''
filename, extension = os.path.splitext(filename)
self.encoder = keras.models.load_model(f'{filename}-encoder.keras', custom_objects={'SamplingLayer': SamplingLayer})
self.decoder = keras.models.load_model(f'{filename}-decoder.keras')
print('Reloaded.')
@classmethod
def about(cls):
'''Basic whoami method'''
display(Markdown('<br>**FIDLE 2024 - VAE**'))
print('Version :', cls.version)
print('Keras version :', keras.__version__)
from modules.models.VAE import VAE
%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/header.svg"></img>
# <!-- TITLE --> [K3WINE1] - Wine quality prediction with a Dense Network (DNN)
<!-- DESC --> Another example of regression, with a wine quality prediction, using Keras 3 and PyTorch
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->
## Objectives :
- Predict the **quality of wines**, based on their analysis
- Understanding the principle and the architecture of a regression with a dense neural network with backup and restore of the trained model.
The **[Wine Quality datasets](https://archive.ics.uci.edu/ml/datasets/wine+Quality)** are made up of analyses of a large number of wines, with an associated quality (between 0 and 10)
This dataset is provided by :
Paulo Cortez, University of Minho, Guimarães, Portugal, http://www3.dsi.uminho.pt/pcortez
A. Cerdeira, F. Almeida, T. Matos and J. Reis, Viticulture Commission of the Vinho Verde Region(CVRVV), Porto, Portugal, @2009
This dataset can be retrieved from [University of California Irvine (UCI)](https://archive.ics.uci.edu/dataset/186/wine+quality)
Due to privacy and logistic issues, only physicochemical and sensory variables are available
There is no data about grape types, wine brand, wine selling price, etc.
- fixed acidity
- volatile acidity
- citric acid
- residual sugar
- chlorides
- free sulfur dioxide
- total sulfur dioxide
- density
- pH
- sulphates
- alcohol
- quality (score between 0 and 10)
## What we're going to do :
- (Retrieve data)
- (Preparing the data)
- (Build a model)
- Train and save the model
- Restore saved model
- Evaluate the model
- Make some predictions
%% Cell type:markdown id: tags:
## Step 1 - Import and init
%% Cell type:code id: tags:
``` python
import os
os.environ['KERAS_BACKEND'] = 'torch'
import keras
import numpy as np
import pandas as pd
import fidle
# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('K3WINE1')
```
%% Cell type:markdown id: tags:
Verbosity during training :
- 0 = silent
- 1 = progress bar
- 2 = one line per epoch
%% Cell type:code id: tags:
``` python
fit_verbosity = 1
dataset_name = 'winequality-red.csv'
```
%% Cell type:markdown id: tags:
Override parameters (batch mode) - Just forget this cell
%% Cell type:code id: tags:
``` python
fidle.override('fit_verbosity', 'dataset_name')
```
%% Cell type:markdown id: tags:
## Step 2 - Retrieve data
%% Cell type:code id: tags:
``` python
data = pd.read_csv(f'{datasets_dir}/WineQuality/origine/{dataset_name}', header=0,sep=';')
display(data.head(5).style.format("{0:.2f}"))
print('Missing Data : ',data.isna().sum().sum(), ' Shape is : ', data.shape)
```
%% Cell type:markdown id: tags:
## Step 3 - Preparing the data
### 3.1 - Split data
We will use 80% of the data for training and 20% for validation.
x will be the data of the analysis and y the quality
%% Cell type:code id: tags:
``` python
# ---- Split => train, test
#
data = data.sample(frac=1., axis=0) # Shuffle
data_train = data.sample(frac=0.8, axis=0) # get 80 %
data_test = data.drop(data_train.index) # test = all - train
# ---- Split => x,y (quality is the target)
#
x_train = data_train.drop('quality', axis=1)
y_train = data_train['quality']
x_test = data_test.drop('quality', axis=1)
y_test = data_test['quality']
print('Original data shape was : ',data.shape)
print('x_train : ',x_train.shape, 'y_train : ',y_train.shape)
print('x_test : ',x_test.shape, 'y_test : ',y_test.shape)
```
%% Cell type:markdown id: tags:
### 3.2 - Data normalization
**Note :**
- All input data must be normalized, train and test.
- To do this we will subtract the mean and divide by the standard deviation.
- But test data should not be used in any way, even for normalization.
- The mean and the standard deviation will therefore only be calculated with the train data.
%% Cell type:code id: tags:
``` python
display(x_train.describe().style.format("{0:.2f}").set_caption("Before normalization :"))
mean = x_train.mean()
std = x_train.std()
x_train = (x_train - mean) / std
x_test = (x_test - mean) / std
display(x_train.describe().style.format("{0:.2f}").set_caption("After normalization :"))
# Convert our DataFrame to numpy array
x_train, y_train = np.array(x_train), np.array(y_train)
x_test, y_test = np.array(x_test), np.array(y_test)
```
%% Cell type:markdown id: tags:
## Step 4 - Build a model
More information about :
- [Optimizer](https://keras.io/api/optimizers)
- [Activation](https://keras.io/api/layers/activations)
- [Loss](https://keras.io/api/losses)
- [Metrics](https://keras.io/api/metrics)
%% Cell type:code id: tags:
``` python
def get_model_v1(shape):
model = keras.models.Sequential()
model.add(keras.layers.Input(shape, name="InputLayer"))
model.add(keras.layers.Dense(64, activation='relu', name='Dense_n1'))
model.add(keras.layers.Dense(64, activation='relu', name='Dense_n2'))
model.add(keras.layers.Dense(1, name='Output'))
model.compile(optimizer = 'rmsprop',
loss = 'mse',
metrics = ['mae', 'mse'] )
return model
```
%% Cell type:markdown id: tags:
## 5 - Train the model
### 5.1 - Get it
%% Cell type:code id: tags:
``` python
model=get_model_v1( (11,) )
model.summary()
```
%% Cell type:markdown id: tags:
### 5.2 - Add callback
%% Cell type:code id: tags:
``` python
os.makedirs('./run/models', mode=0o750, exist_ok=True)
save_dir = "./run/models/best_model.keras"
savemodel_callback = keras.callbacks.ModelCheckpoint( filepath=save_dir, monitor='val_mae', mode='min', save_best_only=True)
```
%% Cell type:markdown id: tags:
### 5.3 - Train it
%% Cell type:code id: tags:
``` python
history = model.fit(x_train,
y_train,
epochs = 100,
batch_size = 10,
verbose = fit_verbosity,
validation_data = (x_test, y_test),
callbacks = [savemodel_callback])
```
%% Cell type:markdown id: tags:
## Step 6 - Evaluate
### 6.1 - Model evaluation
MAE = Mean Absolute Error (between the labels and predictions)
An MAE of 0.5 represents an average prediction error of 0.5 points on the quality scale (0 to 10).
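A tiny, purely illustrative calculation (values invented for the example):

``` python
import numpy as np

# Three hypothetical wines : true quality vs predicted quality
y_true = np.array([6.0, 5.0, 7.0])
y_pred = np.array([5.3, 5.4, 6.3])
mae = np.mean(np.abs(y_true - y_pred))   # (0.7 + 0.4 + 0.7) / 3 = 0.6 quality points
print(f'MAE = {mae:.2f}')
```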
%% Cell type:code id: tags:
``` python
score = model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Cell type:markdown id: tags:
### 6.2 - Training history
What was the best result during our training ?
%% Cell type:code id: tags:
``` python
print("min( val_mae ) : {:.4f}".format( min(history.history["val_mae"]) ) )
```
%% Cell type:code id: tags:
``` python
fidle.scrawler.history( history, plot={'MSE' :['mse', 'val_mse'],
'MAE' :['mae', 'val_mae'],
'LOSS':['loss','val_loss']}, save_as='01-history')
```
%% Cell type:markdown id: tags:
## Step 7 - Restore a model :
%% Cell type:markdown id: tags:
### 7.1 - Reload model
%% Cell type:code id: tags:
``` python
loaded_model = keras.models.load_model('./run/models/best_model.keras')
loaded_model.summary()
print("Loaded.")
```
%% Cell type:markdown id: tags:
### 7.2 - Evaluate it :
%% Cell type:code id: tags:
``` python
score = loaded_model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Cell type:markdown id: tags:
### 7.3 - Make a prediction
%% Cell type:code id: tags:
``` python
# ---- Pick n entries from our test set
n = 200
ii = np.random.randint(1,len(x_test),n)
x_sample = x_test[ii]
y_sample = y_test[ii]
```
%% Cell type:code id: tags:
``` python
# ---- Make predictions
y_pred = loaded_model.predict( x_sample, verbose=2 )
```
%% Cell type:code id: tags:
``` python
# ---- Show it
print('Wine Prediction Real Delta')
for i in range(n):
pred = y_pred[i][0]
real = y_sample[i]
delta = real-pred
print(f'{i:03d} {pred:.2f} {real} {delta:+.2f} ')
```
%% Cell type:markdown id: tags:
### A few questions :
- Can this model be used for red wines from Bordeaux and/or Beaujolais?
- What are the limitations of this model?
- What are the limitations of this dataset?
%% Cell type:code id: tags:
``` python
fidle.end()
```
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/logo-paysage.svg"></img>
%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/header.svg"></img>
# <!-- TITLE --> [LWINE1] - Wine quality prediction with a Dense Network (DNN)
<!-- DESC --> Another example of regression, with a wine quality prediction, using PyTorch Lightning
<!-- AUTHOR : Achille Mbogol Touye (EFFILIA-MIAI/SIMaP) -->
## Objectives :
- Predict the **quality of wines**, based on their analysis
- Understanding the principle and the architecture of a regression with a dense neural network with backup and restore of the trained model.
The **[Wine Quality datasets](https://archive.ics.uci.edu/ml/datasets/wine+Quality)** are made up of analyses of a large number of wines, with an associated quality (between 0 and 10)
This dataset is provided by :
Paulo Cortez, University of Minho, Guimarães, Portugal, http://www3.dsi.uminho.pt/pcortez
A. Cerdeira, F. Almeida, T. Matos and J. Reis, Viticulture Commission of the Vinho Verde Region(CVRVV), Porto, Portugal, @2009
This dataset can be retrieved from [University of California Irvine (UCI)](https://archive-beta.ics.uci.edu/ml/datasets/wine+quality)
Due to privacy and logistic issues, only physicochemical and sensory variables are available
There is no data about grape types, wine brand, wine selling price, etc.
- fixed acidity
- volatile acidity
- citric acid
- residual sugar
- chlorides
- free sulfur dioxide
- total sulfur dioxide
- density
- pH
- sulphates
- alcohol
- quality (score between 0 and 10)
## What we're going to do :
- (Retrieve data)
- (Preparing the data)
- (Build a model)
- Train and save the model
- Restore saved model
- Evaluate the model
- Make some predictions
%% Cell type:markdown id: tags:
## Step 1 - Import and init
%% Cell type:code id: tags:
``` python
# Import some packages
import os
import sys
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import lightning.pytorch as pl
import torch.nn.functional as F
import torchvision.transforms as T
from importlib import reload
from IPython.display import Markdown
from torch.utils.data import Dataset, DataLoader, random_split
from modules.progressbar import CustomTrainProgressBar
from modules.data_load import WineQualityDataset, Normalize, ToTensor
from lightning.pytorch.loggers.tensorboard import TensorBoardLogger
from torchmetrics.functional.regression import mean_absolute_error, mean_squared_error
import fidle
# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('LWINE1')
```
%% Cell type:markdown id: tags:
Verbosity during training :
- 0 = silent
- 1 = progress bar
- 2 = one line per epoch
%% Cell type:code id: tags:
``` python
fit_verbosity = 1
dataset_name = 'winequality-red.csv'
```
%% Cell type:markdown id: tags:
Override parameters (batch mode) - Just forget this cell
%% Cell type:code id: tags:
``` python
fidle.override('fit_verbosity', 'dataset_name')
```
%% Cell type:markdown id: tags:
## Step 2 - Retrieve data
%% Cell type:code id: tags:
``` python
csv_file_path=f'{datasets_dir}/WineQuality/origine/{dataset_name}'
datasets=WineQualityDataset(csv_file_path)
display(datasets.data.head(5).style.format("{0:.2f}"))
print('Missing Data : ',datasets.data.isna().sum().sum(), ' Shape is : ', datasets.data.shape)
```
%% Cell type:markdown id: tags:
## Step 3 - Preparing the data
%% Cell type:markdown id: tags:
### 3.1 - Data normalization
**Note :**
- All input features must be normalized.
- To do this we will subtract the mean and divide by the standard deviation for each input features.
- Then we convert numpy array features and target **(quality)** to torch tensor
%% Cell type:code id: tags:
``` python
transforms=T.Compose([Normalize(csv_file_path), ToTensor()])
dataset=WineQualityDataset(csv_file_path,transform=transforms)
```
%% Cell type:code id: tags:
``` python
display(Markdown("before normalization :"))
display(datasets[:]["features"])
print()
display(Markdown("After normalization :"))
display(dataset[:]["features"])
```
%% Cell type:markdown id: tags:
### 3.2 - Split data
We will use 80% of the data for training and 20% for validation.
x will be the features data of the analysis and y the target (quality)
%% Cell type:code id: tags:
``` python
# ---- Split => train, test
#
data_train_len = int(len(dataset)*0.8) # get 80 %
data_test_len = len(dataset) -data_train_len # test = all - train
# ---- Split => x,y with random_split
#
data_train_subset, data_test_subset=random_split(dataset, [data_train_len, data_test_len])
x_train = data_train_subset[:]["features"]
y_train = data_train_subset[:]["quality" ]
x_test = data_test_subset [:]["features"]
y_test = data_test_subset [:]["quality" ]
print('Original data shape was : ',dataset.data.shape)
print('x_train : ',x_train.shape, 'y_train : ',y_train.shape)
print('x_test : ',x_test.shape, 'y_test : ',y_test.shape)
```
%% Cell type:markdown id: tags:
### 3.3 - Use a DataLoader for training
The Dataset retrieves our dataset's features and labels one sample at a time. While training a model, we typically want to pass samples in minibatches and reshuffle the data at every epoch to reduce overfitting. DataLoader is an iterable that abstracts this complexity for us in an easy API.
%% Cell type:code id: tags:
``` python
# train batch data
train_loader= DataLoader(
dataset=data_train_subset,
shuffle=True,
batch_size=20,
num_workers=2
)
# test batch data
test_loader= DataLoader(
dataset=data_test_subset,
shuffle=False,
batch_size=20,
num_workers=2
)
```
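To make the "iterable" point concrete, here is a quick, illustrative check of what `train_loader` yields (shapes follow the batch_size of 20 and the 11 input features):

``` python
# Peek at one minibatch produced by the DataLoader
batch = next(iter(train_loader))
print(batch['features'].shape)   # -> torch.Size([20, 11]) : 20 samples, 11 features
print(batch['quality'].shape)    # -> torch.Size([20, 1])  : 20 targets
```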
%% Cell type:markdown id: tags:
## Step 4 - Build a model
More information about :
- [Optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers)
- [Activation](https://www.tensorflow.org/api_docs/python/tf/keras/activations)
- [Loss](https://www.tensorflow.org/api_docs/python/tf/keras/losses)
- [Metrics](https://www.tensorflow.org/api_docs/python/tf/keras/metrics)
%% Cell type:code id: tags:
``` python
class LitRegression(pl.LightningModule):
def __init__(self,in_features=11):
super().__init__()
self.model = nn.Sequential(
nn.Linear(in_features, 128), # hidden layer 1
nn.ReLU(), # activation function
nn.Linear(128, 128), # hidden layer 2
nn.ReLU(), # activation function
nn.Linear(128, 1)) # output layer
def forward(self, x): # forward pass
x = self.model(x)
return x
# optimizer
def configure_optimizers(self):
optimizer = torch.optim.RMSprop(self.parameters(),lr=1e-4)
return optimizer
def training_step(self, batch, batch_idx):
# defines the train loop.
x_features, y_target = batch["features"],batch["quality"]
# forward pass
y_pred = self.model(x_features)
# loss function MSE
loss = F.mse_loss(y_pred, y_target)
# metrics mae
mae = mean_absolute_error(y_pred,y_target)
# metrics mse
mse = mean_squared_error(y_pred,y_target)
metrics= {"train_loss": loss,
"train_mae" : mae,
"train_mse" : mse
}
# logs metrics for each training_step
self.log_dict(metrics,
on_step = False,
on_epoch = True,
logger = True,
prog_bar = True,
)
return loss
def validation_step(self, batch, batch_idx):
# defines the val loop.
x_features, y_target = batch["features"],batch["quality"]
# forward pass
y_pred = self.model(x_features)
# loss function MSE
loss = F.mse_loss(y_pred, y_target)
# metrics
mae = mean_absolute_error(y_pred,y_target)
# metrics
mse = mean_squared_error(y_pred,y_target)
metrics= {"val_loss": loss,
"val_mae" : mae,
"val_mse" : mse
}
# logs metrics for each validation_step
self.log_dict(metrics,
on_step = False,
on_epoch = True,
logger = True,
prog_bar = True,
)
return metrics
```
%% Cell type:markdown id: tags:
## 5 - Train the model
### 5.1 - Get it
%% Cell type:code id: tags:
``` python
reg=LitRegression(in_features=11)
print(reg)
```
%% Cell type:markdown id: tags:
### 5.2 - Add callback
%% Cell type:code id: tags:
``` python
os.makedirs('./run/models', exist_ok=True)
save_dir = "./run/models/"
filename ='best-model-{epoch}-{val_loss:.2f}'
savemodel_callback = pl.callbacks.ModelCheckpoint(dirpath=save_dir,
filename=filename,
save_top_k=1,
verbose=False,
monitor="val_loss"
)
```
%% Cell type:markdown id: tags:
### 5.3 - Train it
%% Cell type:code id: tags:
``` python
# loggers data
os.makedirs(f'{run_dir}/logs', mode=0o750, exist_ok=True)
logger= TensorBoardLogger(save_dir=f'{run_dir}/logs',name="reg_logs")
```
%% Cell type:code id: tags:
``` python
# train model
trainer = pl.Trainer(accelerator='auto',
max_epochs=100,
logger=logger,
num_sanity_val_steps=0,
callbacks=[savemodel_callback,CustomTrainProgressBar()])
trainer.fit(model=reg, train_dataloaders=train_loader, val_dataloaders=test_loader)
```
%% Cell type:markdown id: tags:
## Step 6 - Evaluate
### 6.1 - Model evaluation
MAE = Mean Absolute Error (between the labels and predictions)
An MAE of 0.5 represents an average prediction error of 0.5 points on the quality scale (0 to 10).
%% Cell type:code id: tags:
``` python
score=trainer.validate(model=reg, dataloaders=test_loader, verbose=False)
print('x_test / loss : {:5.4f}'.format(score[0]['val_loss']))
print('x_test / mae : {:5.4f}'.format(score[0]['val_mae']))
print('x_test / mse : {:5.4f}'.format(score[0]['val_mse']))
```
%% Cell type:markdown id: tags:
### 6.2 - Training history
To access logs with TensorBoard :
- Under **Docker**, from a terminal launched via the jupyterlab launcher, use the following command:<br>
```tensorboard --logdir <path-to-logs> --host 0.0.0.0```
- If you're **not using Docker**, from a terminal :<br>
```tensorboard --logdir <path-to-logs>```
**Note:** Only one TensorBoard instance can be used at a time.
%% Cell type:markdown id: tags:
## Step 7 - Restore a model :
%% Cell type:markdown id: tags:
### 7.1 - Reload model
%% Cell type:code id: tags:
``` python
# Load the model from a checkpoint
loaded_model = LitRegression.load_from_checkpoint(savemodel_callback.best_model_path)
print("Loaded:")
print(loaded_model)
```
%% Cell type:markdown id: tags:
### 7.2 - Evaluate it :
%% Cell type:code id: tags:
``` python
score=trainer.validate(model=loaded_model, dataloaders=test_loader, verbose=False)
print('x_test / loss : {:5.4f}'.format(score[0]['val_loss']))
print('x_test / mae : {:5.4f}'.format(score[0]['val_mae']))
print('x_test / mse : {:5.4f}'.format(score[0]['val_mse']))
```
%% Cell type:markdown id: tags:
### 7.3 - Make a prediction
%% Cell type:code id: tags:
``` python
# ---- Pick n entries from our test set
n = 200
ii = np.random.randint(1,len(x_test),n)
x_sample = x_test[ii]
y_sample = y_test[ii]
```
%% Cell type:code id: tags:
``` python
# ---- Make predictions :
# Sets the model in evaluation mode.
loaded_model.eval()
# Perform inference using the loaded model
y_pred = loaded_model( x_sample )
```
%% Cell type:code id: tags:
``` python
# ---- Show it
print('Wine Prediction Real Delta')
for i in range(n):
pred = y_pred[i][0].item()
real = y_sample[i][0].item()
delta = real-pred
print(f'{i:03d} {pred:.2f} {real} {delta:+.2f} ')
```
%% Cell type:code id: tags:
``` python
fidle.end()
```
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/logo-paysage.svg"></img>
%% Cell type:code id: tags:
``` python
```
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___|
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/SARI/DEVLOG 2023
# ------------------------------------------------------------------
# 2.0 version by Achille Mbogol Touye (EFELIA-MIAI/SIMAP), sep 2023
import torch
import pandas as pd
import lightning.pytorch as pl
class WineQualityDataset(pl.LightningDataModule):
"""Wine Quality dataset."""
def __init__(self, csv_file, transform=None):
"""
Args:
csv_file (string): Path to the csv file.
transform (callable, optional): Optional transform to be applied on a sample.
"""
super().__init__()
self.csv_file=csv_file
self.data = pd.read_csv(self.csv_file, header=0, sep=';')
self.transform = transform
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
features = self.data.iloc[idx, :-1].values.astype('float32')
target = self.data.iloc[idx, -1:].values.astype('float32')
sample = {'features':features, 'quality':target}
if self.transform:
sample = self.transform(sample)
return sample
class Normalize(WineQualityDataset):
"""normalize data"""
def __init__(self, csv_file):
mean,std=self.compute_mean_and_std(csv_file)
self.mean=mean
self.std=std
def compute_mean_and_std(self, csv_file):
"""Compute the mean and std for each feature."""
dataset= WineQualityDataset(csv_file)
mean = dataset.data.iloc[:,:-1].mean(axis=0).values.astype('float32')
std = dataset.data.iloc[:,:-1].std(axis=0).values.astype('float32')
return mean,std
def __call__(self, sample):
features, target = sample['features'],sample['quality']
norm_features = (features - self.mean) / self.std # normalize features
return {'features':norm_features,
'quality':target
}
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
features, target = sample['features'], sample['quality']
return {'features': torch.from_numpy(features),
'quality' : torch.from_numpy(target)
}
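# ------------------------------------------------------------------
# Usage sketch (illustrative, not used by the notebooks) : the dataset is
# indexable, so it can be wrapped in a DataLoader, chaining the Normalize
# and ToTensor transforms defined above. The csv path below is an assumption.
if __name__ == '__main__':
    from torch.utils.data import DataLoader

    csv_file  = './data/winequality-red.csv'                 # hypothetical path
    norm      = Normalize(csv_file)
    to_tensor = ToTensor()
    dataset   = WineQualityDataset(csv_file, transform=lambda s: to_tensor(norm(s)))
    loader    = DataLoader(dataset, batch_size=64, shuffle=True)
    batch     = next(iter(loader))
    print(batch['features'].shape, batch['quality'].shape)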
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___|
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/SARI/DEVLOG 2023
# ------------------------------------------------------------------
# 2.0 version by Achille Mbogol Touye (EFELIA-MIAI/SIMAP), sep 2023
from tqdm import tqdm as _tqdm
from lightning.pytorch.callbacks import TQDMProgressBar
# Progress bar callback, used to display the training metrics
class CustomTrainProgressBar(TQDMProgressBar):
def __init__(self):
super().__init__()
self._val_progress_bar = _tqdm()
def init_train_tqdm(self):
bar=super().init_train_tqdm()
bar.set_description("Training")
return bar
@property
def val_progress_bar(self):
if self._val_progress_bar is None:
raise ValueError("The `_val_progress_bar` reference has not been set yet.")
return self._val_progress_bar
def on_validation_start(self, trainer, pl_module):
        # Disable the display of the validation progress bar
self.val_progress_bar.disable = True
\ No newline at end of file
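# ------------------------------------------------------------------
# Usage sketch (illustrative) : the callback above is simply passed to the
# Lightning Trainer ; the Trainer arguments below are assumptions.
if __name__ == '__main__':
    import lightning.pytorch as pl
    trainer = pl.Trainer(max_epochs=5, callbacks=[CustomTrainProgressBar()], logger=False)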
# base image
ARG PYTHON_VERSION=3.9
ARG docker_image_base=python:${PYTHON_VERSION}-slim
FROM ${docker_image_base}
# maintainers
LABEL maintainer1=soraya.arias@inria.fr maintainer2=jean-luc.parouty@simap.grenoble-inp.fr
ARG ARCH_VERSION=cpu
ARG BRANCH=pre-master
# Ensure a sane environment
ENV TZ=Europe/Paris LANG=C.UTF-8 LC_ALL=C.UTF-8 DEBIAN_FRONTEND=noninteractive
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone && \
apt update --fix-missing && \
apt install -y --no-install-recommends apt-utils \
procps \
python3-venv \
python3-pip && \
apt -y dist-upgrade && \
apt clean && \
rm -fr /var/lib/apt/lists/*
# copy Python requirement packages list in docker image
COPY requirements-${ARCH_VERSION}.txt /root/requirements-${ARCH_VERSION}.txt
# Update Python tools and install requirements packages for Fidle
RUN python3 -m pip install --upgrade pip && \
pip3 install --no-cache-dir --upgrade -r /root/requirements-${ARCH_VERSION}.txt
# Install tensorboard & update jupyter
RUN pip3 install --no-cache-dir --upgrade tensorboard tensorboardX jupyter ipywidgets
# Remove the default Python kernel logo
RUN rm /usr/local/share/jupyter/kernels/python3/logo*
# Change default logo and name kernels
COPY images/env-keras3.png /usr/local/share/jupyter/kernels/python3/logo-64x64.png
COPY images/env-keras3.svg /usr/local/share/jupyter/kernels/python3/logo-svg.svg
# Get Fidle datasets
RUN mkdir /data && \
fid install_datasets --quiet --install_dir /data
# Get Fidle notebooks and create link
RUN mkdir /notebooks/ && \
fid install_notebooks --notebooks fidle-${BRANCH} --quiet --install_dir /notebooks && \
ln -s $(ls -1td /notebooks/* | head -1) /notebooks/last
# Add Jupyter configuration (no browser, listen all interfaces, ...)
COPY jupyter_lab_config.py /root/.jupyter/jupyter_lab_config.py
COPY notebook.json /root/.jupyter/nbconfig/notebook.json
# Jupyter notebook uses 8888
EXPOSE 8888
# tensorboard uses 6006
EXPOSE 6006
VOLUME /notebooks
WORKDIR /notebooks
# Set Keras backend
ENV KERAS_BACKEND=torch
# Set Python path to add fidle path
ENV PYTHONPATH=/notebooks/fidle-master/:$PYTHONPATH
# Set default shell (useful in the notebooks)
ENV SHELL=/bin/bash
# Set Fidle dataset directory variable
ENV FIDLE_DATASETS_DIR=/data/datasets-fidle
# Run a notebook by default
CMD ["jupyter", "lab"]
\ No newline at end of file
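# ------------------------------------------------------------------
# Build / run sketch (illustrative) : the image name and tag are assumptions ;
# the build args, ports 8888 (jupyter lab) and 6006 (tensorboard) and the
# /notebooks volume come from the Dockerfile above.
docker build --build-arg ARCH_VERSION=cpu --build-arg BRANCH=pre-master -t fidle/env-keras3:cpu .
docker run -it --rm -p 8888:8888 -p 6006:6006 fidle/env-keras3:cpu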
docker/images/env-keras3.png (binary image, 2.91 KiB)
<?xml version="1.0" encoding="UTF-8"?>
<svg id="Calque_2" data-name="Calque 2" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
<defs>
<style>
.cls-1 {
fill: #d00000;
}
.cls-1, .cls-2, .cls-3, .cls-4, .cls-5 {
stroke-width: 0px;
}
.cls-2 {
fill: none;
}
.cls-3 {
fill: #fff;
}
.cls-4 {
fill: #e12229;
}
.cls-5 {
fill: #ee4c2c;
}
</style>
</defs>
<g id="Mode_Isolation" data-name="Mode Isolation">
<g>
<rect class="cls-3" width="100" height="100"/>
<g id="group">
<path id="Path" class="cls-5" d="M84.64,15.79l-3.09,3.09c5.06,5.06,5.06,13.21,0,18.17-5.06,5.06-13.21,5.06-18.17,0-5.06-5.06-5.06-13.21,0-18.17l8.01-8.01,1.12-1.12V3.7l-12.08,12.08c-6.75,6.75-6.75,17.61,0,24.36,6.75,6.75,17.61,6.75,24.22,0,6.75-6.79,6.75-17.61,0-24.36Z"/>
<path id="Path-1" class="cls-5" d="M80.85,12.79c0,1.24-1.01,2.25-2.25,2.25s-2.25-1.01-2.25-2.25,1.01-2.25,2.25-2.25,2.25,1.01,2.25,2.25Z"/>
</g>
<g>
<g>
<path class="cls-2" d="M52.97,86.43c-4.89,1.33-6.52,1.26-7.02,1.15.37-.75,2.11-2.39,3.93-3.69.43-.31.54-.91.24-1.35-.3-.44-.89-.55-1.33-.24-2.58,1.83-5.48,4.39-4.67,6.16.31.67.95,1.12,2.5,1.12,1.4,0,3.55-.37,6.85-1.27.51-.14.81-.67.67-1.19-.13-.52-.66-.83-1.17-.69Z"/>
<g>
<path class="cls-4" d="M68.15,44.5c-.34,0-.63-.17-.87-.5-.3-.42-.64-.57-1.3-.57-.2,0-.4.01-.59.03-.22.01-.42.03-.62.03-.32,0-.79-.03-1.23-.27-1.36-.77-1.86-2.52-1.11-3.9.5-.92,1.46-1.5,2.49-1.5.48,0,.96.12,1.38.36,1.06.59,2.99,4.78,2.77,5.62l-.18.7-.74.02Z"/>
<path class="cls-3" d="M64.93,38.75c.31,0,.63.08.92.24.85.48,2.51,4.58,2.3,4.58-.02,0-.05-.03-.1-.11-.58-.82-1.33-.97-2.06-.97-.43,0-.84.05-1.21.05-.29,0-.56-.03-.77-.15-.92-.52-1.26-1.7-.75-2.64.35-.64,1-1.01,1.67-1.01M64.93,36.87c-1.38,0-2.66.76-3.32,1.99-.99,1.83-.33,4.15,1.48,5.16.62.35,1.26.39,1.68.39.21,0,.44-.01.68-.03.17-.01.35-.02.53-.02.41,0,.45.05.53.17.55.79,1.26.9,1.64.9h1.45l.38-1.41c.11-.43.24-.93-.94-3.48-1.06-2.29-1.74-2.9-2.27-3.2-.56-.32-1.2-.48-1.84-.48h0Z"/>
</g>
<path class="cls-4" d="M62.06,75.3c-.39-.47-.34-1.18.12-1.58.46-.4,1.16-.35,1.55.13,5.79,6.92,15.18,8.77,24.52,4.83.95-2.66,1.42-5.45,1.49-8.18,0-7.41-3.53-14.26-9.52-18.38-2.78-1.91-9.2-4.45-17.62-3.04-6.19,1.04-12.61,5.82-15.12,7.97-1.51,1.29-19.5,18.68-27,15.22-5.07-2.35,3.99-10.88-.17-18.68-.11-.21-.41-.23-.55-.04-2.12,2.91-4.18,6.41-7,4.84-1.26-.7-2.39-2.94-3.26-4.36-.18-.28-.61-.14-.6.19.32,9.8,4.97,17.01,8.71,21.57,6.47,7.9,17.8,17.09,36.12,18.95,18.88,1.75,28.93-4.73,33.3-13.21-2.84.96-5.67,1.44-8.4,1.44-6.45,0-12.34-2.63-16.56-7.67ZM53.46,88.31c-3.3.9-5.45,1.27-6.85,1.27-1.55,0-2.19-.45-2.5-1.12-.81-1.77,2.1-4.32,4.67-6.16.43-.3,1.03-.2,1.33.24.3.44.19,1.05-.24,1.35-1.83,1.3-3.56,2.94-3.93,3.69.5.11,2.14.18,7.02-1.15.51-.14,1.03.17,1.17.69.14.52-.16,1.05-.67,1.19Z"/>
<g>
<path class="cls-4" d="M70.65,47.4c-.36,0-.83-.21-1-.82-.32-1.15.43-5.99,2.83-7.43.42-.25.9-.39,1.39-.39,1.04,0,2,.58,2.5,1.51.75,1.38.25,3.13-1.11,3.9-.15.09-.33.18-.53.28-.93.49-2.34,1.22-3.2,2.45-.3.42-.68.49-.88.49h0Z"/>
<path class="cls-3" d="M73.88,39.71c.67,0,1.33.38,1.67,1.02.51.94.17,2.12-.75,2.64s-2.86,1.33-4.04,3.01c-.04.06-.08.09-.11.09-.43,0,.1-5.18,2.31-6.51.29-.17.6-.25.91-.25M73.88,37.83c-.66,0-1.31.18-1.88.52-2.91,1.74-3.65,7.04-3.25,8.48.25.9,1.01,1.5,1.9,1.5.65,0,1.25-.32,1.64-.89.73-1.04,1.97-1.69,2.87-2.16.21-.11.39-.21.55-.29,1.81-1.02,2.47-3.33,1.48-5.17-.67-1.23-1.94-2-3.32-2h0Z"/>
</g>
<g>
<path class="cls-4" d="M70.32,38.97c-.19,0-.68-.07-.96-.67-.34-.73-.85-3.85.48-5.42.42-.5,1.03-.78,1.67-.78.54,0,1.05.2,1.44.56.86.8.91,2.2.09,3.11-.08.09-.17.19-.28.3-.48.5-1.19,1.26-1.48,2.17-.17.54-.62.73-.96.73h0Z"/>
<path class="cls-3" d="M71.52,33.04c.29,0,.58.1.8.31.49.46.51,1.26.03,1.8s-1.54,1.5-1.95,2.81c-.02.06-.04.08-.06.08-.28,0-.88-3.23.23-4.55.25-.3.61-.45.96-.45M71.52,31.16c-.92,0-1.79.41-2.39,1.11-1.6,1.89-1.08,5.4-.61,6.42.52,1.13,1.52,1.22,1.81,1.22.85,0,1.58-.54,1.85-1.39.22-.7.83-1.34,1.27-1.81.11-.12.21-.23.3-.32,1.15-1.29,1.08-3.27-.15-4.42-.56-.52-1.3-.81-2.07-.81h0Z"/>
</g>
</g>
<g>
<ellipse class="cls-3" cx="75.51" cy="68.45" rx="3.52" ry="3.88"/>
<ellipse class="cls-4" cx="76.93" cy="69.31" rx="2.38" ry="2.42"/>
</g>
</g>
<g>
<path class="cls-3" d="M43.24,43.2s0,0,0,0H11.89s0,0,0,0V11.85s0,0,0,0h31.35s0,0,0,0v31.35h0Z"/>
<path class="cls-1" d="M42.72,42.68s0,0,0,0H12.41s0,0,0,0V12.37s0,0,0,0h30.31s0,0,0,0v30.31h0Z"/>
<path class="cls-3" d="M20.68,35.76s.01.05.03.07l.52.52s.05.03.07.03h1.78s.05-.01.07-.03l.52-.52s.03-.05.03-.07v-5.63s.01-.05.03-.07l2.26-2.15s.04-.01.05,0l5.7,8.44s.04.03.06.03h2.52s.05-.02.06-.04l.46-.88s0-.05,0-.07l-6.67-9.66s-.01-.05,0-.06l6.13-6.1s.03-.05.03-.07v-.11s0-.06-.02-.08l-.35-.81s-.04-.04-.06-.04h-2.49s-.05.01-.07.03l-7.62,7.64s-.03.01-.03-.01v-7.01s-.01-.06-.03-.07l-.51-.55s-.05-.03-.07-.03h-1.79s-.05.01-.07.03l-.52.56s-.03.05-.03.07v16.65h0Z"/>
</g>
</g>
</g>
</svg>
\ No newline at end of file
# Configuration file for lab.
#------------------------------------------------------------------------------
# Application(SingletonConfigurable) configuration
#------------------------------------------------------------------------------
## This is an application.
## Set the log level by value or name.
# Choices: any of [0, 10, 20, 30, 40, 50, 'DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']
# Default: 30
c.Application.log_level = 'INFO'
#------------------------------------------------------------------------------
# JupyterApp(Application) configuration
#------------------------------------------------------------------------------
## Base class for Jupyter applications
## Answer yes to any prompts.
# Default: False
# c.JupyterApp.answer_yes = False
## Full path of a config file.
# Default: ''
# c.JupyterApp.config_file = ''
## Specify a config file to load.
# Default: ''
# c.JupyterApp.config_file_name = ''
## Generate default config file.
# Default: False
# c.JupyterApp.generate_config = False
## The date format used by logging formatters for %(asctime)s
# See also: Application.log_datefmt
# c.JupyterApp.log_datefmt = '%Y-%m-%d %H:%M:%S'
## The Logging format template
# See also: Application.log_format
# c.JupyterApp.log_format = '[%(name)s]%(highlevel)s %(message)s'
## Set the log level by value or name.
# See also: Application.log_level
# c.JupyterApp.log_level = 30
## Instead of starting the Application, dump configuration to stdout
# See also: Application.show_config
# c.JupyterApp.show_config = False
## Instead of starting the Application, dump configuration to stdout (as JSON)
# See also: Application.show_config_json
# c.JupyterApp.show_config_json = False
#------------------------------------------------------------------------------
# ExtensionApp(JupyterApp) configuration
#------------------------------------------------------------------------------
## Base class for configurable Jupyter Server Extension Applications.
#
# ExtensionApp subclasses can be initialized two ways:
# 1. Extension is listed as a jpserver_extension, and ServerApp calls
# its load_jupyter_server_extension classmethod. This is the
# classic way of loading a server extension.
# 2. Extension is launched directly by calling its `launch_instance`
# class method. This method can be set as a entry_point in
# the extensions setup.py
## Answer yes to any prompts.
# See also: JupyterApp.answer_yes
# c.ExtensionApp.answer_yes = False
## Full path of a config file.
# See also: JupyterApp.config_file
# c.ExtensionApp.config_file = ''
## Specify a config file to load.
# See also: JupyterApp.config_file_name
# c.ExtensionApp.config_file_name = ''
# Default: ''
# c.ExtensionApp.default_url = ''
## Generate default config file.
# See also: JupyterApp.generate_config
# c.ExtensionApp.generate_config = False
## Handlers appended to the server.
# Default: []
# c.ExtensionApp.handlers = []
## The date format used by logging formatters for %(asctime)s
# See also: Application.log_datefmt
# c.ExtensionApp.log_datefmt = '%Y-%m-%d %H:%M:%S'
## The Logging format template
# See also: Application.log_format
# c.ExtensionApp.log_format = '[%(name)s]%(highlevel)s %(message)s'
## Set the log level by value or name.
# See also: Application.log_level
# c.ExtensionApp.log_level = 30
## Whether to open in a browser after starting.
# The specific browser used is platform dependent and
# determined by the python standard library `webbrowser`
# module, unless it is overridden using the --browser
# (ServerApp.browser) configuration option.
# Default: False
# c.ExtensionApp.open_browser = False
## Settings that will be passed to the server.
# Default: {}
# c.ExtensionApp.settings = {}
## Instead of starting the Application, dump configuration to stdout
# See also: Application.show_config
# c.ExtensionApp.show_config = False
## Instead of starting the Application, dump configuration to stdout (as JSON)
# See also: Application.show_config_json
# c.ExtensionApp.show_config_json = False
## paths to search for serving static files.
#
# This allows adding javascript/css to be available from the notebook server machine,
# or overriding individual files in the IPython
# Default: []
# c.ExtensionApp.static_paths = []
## Url where the static assets for the extension are served.
# Default: ''
# c.ExtensionApp.static_url_prefix = ''
## Paths to search for serving jinja templates.
#
# Can be used to override templates from notebook.templates.
# Default: []
# c.ExtensionApp.template_paths = []
#------------------------------------------------------------------------------
# LabServerApp(ExtensionApp) configuration
#------------------------------------------------------------------------------
## A Lab Server Application that runs out-of-the-box
## "A list of comma-separated URIs to get the allowed extensions list
#
# .. versionchanged:: 2.0.0
#     `LabServerApp.whitelist_uris` renamed to `allowed_extensions_uris`
# Default: ''
# c.LabServerApp.allowed_extensions_uris = ''
## Answer yes to any prompts.
# See also: JupyterApp.answer_yes
# c.LabServerApp.answer_yes = False
## The application settings directory.
# Default: ''
# c.LabServerApp.app_settings_dir = ''
## The url path for the application.
# Default: '/lab'
# c.LabServerApp.app_url = '/lab'
## Deprecated, use `LabServerApp.blocked_extensions_uris`
# Default: ''
# c.LabServerApp.blacklist_uris = ''
## A list of comma-separated URIs to get the blocked extensions list
#
# .. versionchanged:: 2.0.0
# `LabServerApp.blacklist_uris` renamed to `blocked_extensions_uris`
# Default: ''
# c.LabServerApp.blocked_extensions_uris = ''
## Whether to cache files on the server. This should be `True` except in dev
# mode.
# Default: True
# c.LabServerApp.cache_files = True
## Full path of a config file.
# See also: JupyterApp.config_file
# c.LabServerApp.config_file = ''
## Specify a config file to load.
# See also: JupyterApp.config_file_name
# c.LabServerApp.config_file_name = ''
## Extra paths to look for federated JupyterLab extensions
# Default: []
# c.LabServerApp.extra_labextensions_path = []
## Generate default config file.
# See also: JupyterApp.generate_config
# c.LabServerApp.generate_config = False
## Handlers appended to the server.
# See also: ExtensionApp.handlers
# c.LabServerApp.handlers = []
## Options to pass to the jinja2 environment for this
# Default: {}
# c.LabServerApp.jinja2_options = {}
## The standard paths to look in for federated JupyterLab extensions
# Default: []
# c.LabServerApp.labextensions_path = []
## The url for federated JupyterLab extensions
# Default: ''
# c.LabServerApp.labextensions_url = ''
## The interval delay in seconds to refresh the lists
# Default: 3600
# c.LabServerApp.listings_refresh_seconds = 3600
## The optional kwargs to use for the listings HTTP requests as
# described on https://2.python-requests.org/en/v2.7.0/api/#requests.request
# Default: {}
# c.LabServerApp.listings_request_options = {}
## The listings url.
# Default: ''
# c.LabServerApp.listings_url = ''
## The date format used by logging formatters for %(asctime)s
# See also: Application.log_datefmt
# c.LabServerApp.log_datefmt = '%Y-%m-%d %H:%M:%S'
## The Logging format template
# See also: Application.log_format
# c.LabServerApp.log_format = '[%(name)s]%(highlevel)s %(message)s'
## Set the log level by value or name.
# See also: Application.log_level
# c.LabServerApp.log_level = 30
## Whether to open in a browser after starting.
# See also: ExtensionApp.open_browser
# c.LabServerApp.open_browser = False
## The optional location of the settings schemas directory. If given, a handler
# will be added for settings.
# Default: ''
# c.LabServerApp.schemas_dir = ''
## Settings that will be passed to the server.
# See also: ExtensionApp.settings
# c.LabServerApp.settings = {}
## The url path of the settings handler.
# Default: ''
# c.LabServerApp.settings_url = ''
## Instead of starting the Application, dump configuration to stdout
# See also: Application.show_config
# c.LabServerApp.show_config = False
## Instead of starting the Application, dump configuration to stdout (as JSON)
# See also: Application.show_config_json
# c.LabServerApp.show_config_json = False
## The optional location of local static files. If given, a static file handler
# will be added.
# Default: ''
# c.LabServerApp.static_dir = ''
## paths to search for serving static files.
# See also: ExtensionApp.static_paths
# c.LabServerApp.static_paths = []
## Url where the static assets for the extension are served.
# See also: ExtensionApp.static_url_prefix
# c.LabServerApp.static_url_prefix = ''
## Paths to search for serving jinja templates.
# See also: ExtensionApp.template_paths
# c.LabServerApp.template_paths = []
## The application templates directory.
# Default: ''
# c.LabServerApp.templates_dir = ''
## The optional location of the themes directory. If given, a handler will be
# added for themes.
# Default: ''
# c.LabServerApp.themes_dir = ''
## The theme url.
# Default: ''
# c.LabServerApp.themes_url = ''
## The url path of the translations handler.
# Default: ''
# c.LabServerApp.translations_api_url = ''
## The url path of the tree handler.
# Default: ''
# c.LabServerApp.tree_url = ''
## The optional location of the user settings directory.
# Default: ''
# c.LabServerApp.user_settings_dir = ''
## Deprecated, use `LabServerApp.allowed_extensions_uris`
# Default: ''
# c.LabServerApp.whitelist_uris = ''
## The url path of the workspaces API.
# Default: ''
# c.LabServerApp.workspaces_api_url = ''
## The optional location of the saved workspaces directory. If given, a handler
# will be added for workspaces.
# Default: ''
# c.LabServerApp.workspaces_dir = ''
#------------------------------------------------------------------------------
# LabApp(LabServerApp) configuration
#------------------------------------------------------------------------------
##
# See also: LabServerApp.allowed_extensions_uris
# c.LabApp.allowed_extensions_uris = ''
## Answer yes to any prompts.
# See also: JupyterApp.answer_yes
# c.LabApp.answer_yes = False
## The app directory to launch JupyterLab from.
# Default: None
# c.LabApp.app_dir = None
## The application settings directory.
# Default: ''
# c.LabApp.app_settings_dir = ''
## The url path for the application.
# Default: '/lab'
# c.LabApp.app_url = '/lab'
## Deprecated, use `LabServerApp.blocked_extensions_uris`
# See also: LabServerApp.blacklist_uris
# c.LabApp.blacklist_uris = ''
##
# See also: LabServerApp.blocked_extensions_uris
# c.LabApp.blocked_extensions_uris = ''
## Whether to cache files on the server. This should be `True` except in dev
# mode.
# Default: True
# c.LabApp.cache_files = True
## Whether to enable collaborative mode (experimental).
# Default: False
# c.LabApp.collaborative = False
## Full path of a config file.
# See also: JupyterApp.config_file
# c.LabApp.config_file = ''
## Specify a config file to load.
# See also: JupyterApp.config_file_name
# c.LabApp.config_file_name = ''
## Whether to start the app in core mode. In this mode, JupyterLab
# will run using the JavaScript assets that are within the installed
# JupyterLab Python package. In core mode, third party extensions are disabled.
# The `--dev-mode` flag is an alias to this to be used when the Python package
# itself is installed in development mode (`pip install -e .`).
# Default: False
# c.LabApp.core_mode = False
## The default URL to redirect to from `/`
# Default: '/lab'
c.LabApp.default_url = '/lab/tree/README.ipynb'
## Whether to start the app in dev mode. Uses the unpublished local
# JavaScript packages in the `dev_mode` folder. In this case JupyterLab will
# show a red stripe at the top of the page. It can only be used if JupyterLab
# is installed as `pip install -e .`.
# Default: False
# c.LabApp.dev_mode = False
## Whether to expose the global app instance to browser via window.jupyterlab
# Default: False
# c.LabApp.expose_app_in_browser = False
## Whether to load prebuilt extensions in dev mode. This may be
# useful to run and test prebuilt extensions in development installs of
# JupyterLab. APIs in a JupyterLab development install may be
# incompatible with published packages, so prebuilt extensions compiled
# against published packages may not work correctly.
# Default: False
# c.LabApp.extensions_in_dev_mode = False
## Extra paths to look for federated JupyterLab extensions
# Default: []
# c.LabApp.extra_labextensions_path = []
## Generate default config file.
# See also: JupyterApp.generate_config
# c.LabApp.generate_config = False
## Handlers appended to the server.
# See also: ExtensionApp.handlers
# c.LabApp.handlers = []
## Options to pass to the jinja2 environment for this
# Default: {}
# c.LabApp.jinja2_options = {}
## The standard paths to look in for federated JupyterLab extensions
# Default: []
# c.LabApp.labextensions_path = []
## The url for federated JupyterLab extensions
# Default: ''
# c.LabApp.labextensions_url = ''
## The interval delay in seconds to refresh the lists
# See also: LabServerApp.listings_refresh_seconds
# c.LabApp.listings_refresh_seconds = 3600
## The optional kwargs to use for the listings HTTP requests as
# described on https://2.python-requests.org/en/v2.7.0/api/#requests.request
# See also: LabServerApp.listings_request_options
# c.LabApp.listings_request_options = {}
## The listings url.
# Default: ''
# c.LabApp.listings_url = ''
## The date format used by logging formatters for %(asctime)s
# See also: Application.log_datefmt
# c.LabApp.log_datefmt = '%Y-%m-%d %H:%M:%S'
## The Logging format template
# See also: Application.log_format
# c.LabApp.log_format = '[%(name)s]%(highlevel)s %(message)s'
## Set the log level by value or name.
# See also: Application.log_level
# c.LabApp.log_level = 30
## Whether to open in a browser after starting.
# See also: ExtensionApp.open_browser
# c.LabApp.open_browser = False
## The override url for static lab assets, typically a CDN.
# Default: ''
# c.LabApp.override_static_url = ''
## The override url for static lab theme assets, typically a CDN.
# Default: ''
# c.LabApp.override_theme_url = ''
## The optional location of the settings schemas directory. If given, a handler
# will be added for settings.
# Default: ''
# c.LabApp.schemas_dir = ''
## Settings that will be passed to the server.
# See also: ExtensionApp.settings
# c.LabApp.settings = {}
## The url path of the settings handler.
# Default: ''
# c.LabApp.settings_url = ''
## Instead of starting the Application, dump configuration to stdout
# See also: Application.show_config
# c.LabApp.show_config = False
## Instead of starting the Application, dump configuration to stdout (as JSON)
# See also: Application.show_config_json
# c.LabApp.show_config_json = False
## Splice source packages into app directory.
# Default: False
# c.LabApp.splice_source = False
## The optional location of local static files. If given, a static file handler
# will be added.
# Default: ''
# c.LabApp.static_dir = ''
## paths to search for serving static files.
# See also: ExtensionApp.static_paths
# c.LabApp.static_paths = []
## Url where the static assets for the extension are served.
# See also: ExtensionApp.static_url_prefix
# c.LabApp.static_url_prefix = ''
## Paths to search for serving jinja templates.
# See also: ExtensionApp.template_paths
# c.LabApp.template_paths = []
## The application templates directory.
# Default: ''
# c.LabApp.templates_dir = ''
## The optional location of the themes directory. If given, a handler will be
# added for themes.
# Default: ''
# c.LabApp.themes_dir = ''
## The theme url.
# Default: ''
# c.LabApp.themes_url = ''
## The url path of the translations handler.
# Default: ''
# c.LabApp.translations_api_url = ''
## The url path of the tree handler.
# Default: ''
# c.LabApp.tree_url = ''
## The directory for user settings.
# Default: '/root/.jupyter/lab/user-settings'
# c.LabApp.user_settings_dir = '/root/.jupyter/lab/user-settings'
## Whether to serve the app in watch mode
# Default: False
# c.LabApp.watch = False
## Deprecated, use `LabServerApp.allowed_extensions_uris`
# See also: LabServerApp.whitelist_uris
# c.LabApp.whitelist_uris = ''
## The url path of the workspaces API.
# Default: ''
# c.LabApp.workspaces_api_url = ''
## The directory for workspaces
# Default: '/root/.jupyter/lab/workspaces'
# c.LabApp.workspaces_dir = '/root/.jupyter/lab/workspaces'
#------------------------------------------------------------------------------
# ServerApp(JupyterApp) configuration
#------------------------------------------------------------------------------
## Set the Access-Control-Allow-Credentials: true header
# Default: False
# c.ServerApp.allow_credentials = False
## Set the Access-Control-Allow-Origin header
#
# Use '*' to allow any origin to access your server.
#
# Takes precedence over allow_origin_pat.
# Default: ''
# c.ServerApp.allow_origin = ''
## Use a regular expression for the Access-Control-Allow-Origin header
#
# Requests from an origin matching the expression will get replies with:
#
# Access-Control-Allow-Origin: origin
#
# where `origin` is the origin of the request.
#
# Ignored if allow_origin is set.
# Default: ''
# c.ServerApp.allow_origin_pat = ''
## Allow password to be changed at login for the Jupyter server.
#
# While logging in with a token, the Jupyter server UI will give the opportunity to
# the user to enter a new password at the same time that will replace
# the token login mechanism.
#
# This can be set to false to prevent changing password from
# the UI/API.
# Default: True
c.ServerApp.allow_password_change = False
## Allow requests where the Host header doesn't point to a local server
#
# By default, requests get a 403 forbidden response if the 'Host' header
# shows that the browser thinks it's on a non-local domain.
# Setting this option to True disables this check.
#
# This protects against 'DNS rebinding' attacks, where a remote web server
# serves you a page and then changes its DNS to send later requests to a
# local IP, bypassing same-origin checks.
#
# Local IP addresses (such as 127.0.0.1 and ::1) are allowed as local,
# along with hostnames configured in local_hostnames.
# Default: False
# c.ServerApp.allow_remote_access = False
## Whether to allow the user to run the server as root.
# Default: False
c.ServerApp.allow_root = True
## Answer yes to any prompts.
# See also: JupyterApp.answer_yes
# c.ServerApp.answer_yes = False
## "
# Require authentication to access prometheus metrics.
# Default: True
# c.ServerApp.authenticate_prometheus = True
## Reload the webapp when changes are made to any Python src files.
# Default: False
# c.ServerApp.autoreload = False
## The base URL for the Jupyter server.
#
# Leading and trailing slashes can be omitted,
# and will automatically be added.
# Default: '/'
# c.ServerApp.base_url = '/'
## Specify what command to use to invoke a web
# browser when starting the server. If not specified, the
# default browser will be determined by the `webbrowser`
# standard library module, which allows setting of the
# BROWSER environment variable to override it.
# Default: ''
# c.ServerApp.browser = ''
## The full path to an SSL/TLS certificate file.
# Default: ''
# c.ServerApp.certfile = ''
## The full path to a certificate authority certificate for SSL/TLS client
# authentication.
# Default: ''
# c.ServerApp.client_ca = ''
## Full path of a config file.
# See also: JupyterApp.config_file
# c.ServerApp.config_file = ''
## Specify a config file to load.
# See also: JupyterApp.config_file_name
# c.ServerApp.config_file_name = ''
## The config manager class to use
# Default: 'jupyter_server.services.config.manager.ConfigManager'
# c.ServerApp.config_manager_class = 'jupyter_server.services.config.manager.ConfigManager'
## The content manager class to use.
# Default: 'jupyter_server.services.contents.largefilemanager.LargeFileManager'
# c.ServerApp.contents_manager_class = 'jupyter_server.services.contents.largefilemanager.LargeFileManager'
## Extra keyword arguments to pass to `set_secure_cookie`. See tornado's
# set_secure_cookie docs for details.
# Default: {}
# c.ServerApp.cookie_options = {}
## The random bytes used to secure cookies.
# By default this is a new random number every time you start the server.
# Set it to a value in a config file to enable logins to persist across server sessions.
#
# Note: Cookie secrets should be kept private, do not share config files with
# cookie_secret stored in plaintext (you can read the value from a file).
# Default: b''
# c.ServerApp.cookie_secret = b''
## The file where the cookie secret is stored.
# Default: ''
# c.ServerApp.cookie_secret_file = ''
## Override URL shown to users.
#
# Replace actual URL, including protocol, address, port and base URL,
# with the given value when displaying URL to the users. Do not change
# the actual connection URL. If authentication token is enabled, the
# token is added to the custom URL automatically.
#
# This option is intended to be used when the URL to display to the user
# cannot be determined reliably by the Jupyter server (proxified
# or containerized setups for example).
# Default: ''
# c.ServerApp.custom_display_url = ''
## The default URL to redirect to from `/`
# Default: '/'
c.ServerApp.default_url = '/lab/tree/README.ipynb'
## Disable cross-site-request-forgery protection
#
# Jupyter notebook 4.3.1 introduces protection from cross-site request forgeries,
# requiring API requests to either:
#
# - originate from pages served by this server (validated with XSRF cookie and token), or
# - authenticate with a token
#
# Some anonymous compute resources still desire the ability to run code,
# completely without authentication.
# These services can disable all authentication and security checks,
# with the full knowledge of what that implies.
# Default: False
# c.ServerApp.disable_check_xsrf = False
## handlers that should be loaded at higher priority than the default services
# Default: []
# c.ServerApp.extra_services = []
## Extra paths to search for serving static files.
#
# This allows adding javascript/css to be available from the Jupyter server machine,
# or overriding individual files in the IPython
# Default: []
# c.ServerApp.extra_static_paths = []
## Extra paths to search for serving jinja templates.
#
# Can be used to override templates from jupyter_server.templates.
# Default: []
# c.ServerApp.extra_template_paths = []
## Open the named file when the application is launched.
# Default: ''
# c.ServerApp.file_to_run = ''
## The URL prefix where files are opened directly.
# Default: 'notebooks'
# c.ServerApp.file_url_prefix = 'notebooks'
## Generate default config file.
# See also: JupyterApp.generate_config
# c.ServerApp.generate_config = False
## Extra keyword arguments to pass to `get_secure_cookie`. See tornado's
# get_secure_cookie docs for details.
# Default: {}
# c.ServerApp.get_secure_cookie_kwargs = {}
## (bytes/sec)
# Maximum rate at which stream output can be sent on iopub before they are
# limited.
# Default: 1000000
# c.ServerApp.iopub_data_rate_limit = 1000000
## (msgs/sec)
# Maximum rate at which messages can be sent on iopub before they are
# limited.
# Default: 1000
# c.ServerApp.iopub_msg_rate_limit = 1000
## The IP address the Jupyter server will listen on.
# Default: 'localhost'
c.ServerApp.ip = '0.0.0.0'
## Supply extra arguments that will be passed to Jinja environment.
# Default: {}
# c.ServerApp.jinja_environment_options = {}
## Extra variables to supply to jinja templates when rendering.
# Default: {}
# c.ServerApp.jinja_template_vars = {}
## Dict of Python modules to load as Jupyter server extensions. Entry values can
#  be used to enable and disable the loading of the extensions. The extensions
# will be loaded in alphabetical order.
# Default: {}
# c.ServerApp.jpserver_extensions = {}
## The kernel manager class to use.
# Default: 'jupyter_server.services.kernels.kernelmanager.AsyncMappingKernelManager'
# c.ServerApp.kernel_manager_class = 'jupyter_server.services.kernels.kernelmanager.AsyncMappingKernelManager'
## The kernel spec manager class to use. Should be a subclass of
# `jupyter_client.kernelspec.KernelSpecManager`.
#
# The Api of KernelSpecManager is provisional and might change without warning
# between this version of Jupyter and the next stable one.
# Default: 'jupyter_client.kernelspec.KernelSpecManager'
# c.ServerApp.kernel_spec_manager_class = 'jupyter_client.kernelspec.KernelSpecManager'
## Preferred kernel message protocol over websocket to use (default: None). If an
# empty string is passed, select the legacy protocol. If None, the selected
# protocol will depend on what the front-end supports (usually the most recent
# protocol supported by the back-end and the front-end).
# Default: None
# c.ServerApp.kernel_ws_protocol = None
## The full path to a private key file for usage with SSL/TLS.
# Default: ''
# c.ServerApp.keyfile = ''
## Whether to limit the rate of IOPub messages (default: True). If True, use
# iopub_msg_rate_limit, iopub_data_rate_limit and/or rate_limit_window to tune
# the rate.
# Default: True
# c.ServerApp.limit_rate = True
## Hostnames to allow as local when allow_remote_access is False.
#
# Local IP addresses (such as 127.0.0.1 and ::1) are automatically accepted
# as local as well.
# Default: ['localhost']
# c.ServerApp.local_hostnames = ['localhost']
## The date format used by logging formatters for %(asctime)s
# See also: Application.log_datefmt
# c.ServerApp.log_datefmt = '%Y-%m-%d %H:%M:%S'
## The Logging format template
# See also: Application.log_format
# c.ServerApp.log_format = '[%(name)s]%(highlevel)s %(message)s'
## Set the log level by value or name.
# See also: Application.log_level
# c.ServerApp.log_level = 30
## The login handler class to use.
# Default: 'jupyter_server.auth.login.LoginHandler'
# c.ServerApp.login_handler_class = 'jupyter_server.auth.login.LoginHandler'
## The logout handler class to use.
# Default: 'jupyter_server.auth.logout.LogoutHandler'
# c.ServerApp.logout_handler_class = 'jupyter_server.auth.logout.LogoutHandler'
## Sets the maximum allowed size of the client request body, specified in the
# Content-Length request header field. If the size in a request exceeds the
# configured value, a malformed HTTP message is returned to the client.
#
# Note: max_body_size is applied even in streaming mode.
# Default: 536870912
# c.ServerApp.max_body_size = 536870912
## Gets or sets the maximum amount of memory, in bytes, that is allocated for use
# by the buffer manager.
# Default: 536870912
# c.ServerApp.max_buffer_size = 536870912
## Gets or sets a lower bound on the open file handles process resource limit.
# This may need to be increased if you run into an OSError: [Errno 24] Too many
# open files. This is not applicable when running on Windows.
# Default: 0
# c.ServerApp.min_open_files_limit = 0
## DEPRECATED, use root_dir.
# Default: ''
# c.ServerApp.notebook_dir = ''
## Whether to open in a browser after starting.
# The specific browser used is platform dependent and
# determined by the python standard library `webbrowser`
# module, unless it is overridden using the --browser
# (ServerApp.browser) configuration option.
# Default: False
c.ServerApp.open_browser = False
## Hashed password to use for web authentication.
#
# To generate, type in a python/IPython shell:
#
# from jupyter_server.auth import passwd; passwd()
#
# The string should be of the form type:salt:hashed-
# password.
# Default: ''
c.ServerApp.password = ''
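## Example (illustrative) : a hashed password can be generated with
#    python3 -c "from jupyter_server.auth import passwd; print(passwd())"
#  and the resulting string pasted above (the empty string here means that no
#  password is set).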
## Forces users to use a password for the Jupyter server.
# This is useful in a multi user environment, for instance when
# everybody in the LAN can access each other's machine through ssh.
#
# In such a case, serving on localhost is not secure since
# any user can connect to the Jupyter server via ssh.
# Default: False
# c.ServerApp.password_required = False
## The port the server will listen on (env: JUPYTER_PORT).
# Default: 0
c.ServerApp.port = 8888
## The number of additional ports to try if the specified port is not available
# (env: JUPYTER_PORT_RETRIES).
# Default: 50
# c.ServerApp.port_retries = 50
## Preferred starting directory to use for notebooks and kernels.
# Default: ''
# c.ServerApp.preferred_dir = ''
## DISABLED: use %pylab or %matplotlib in the notebook to enable matplotlib.
# Default: 'disabled'
# c.ServerApp.pylab = 'disabled'
## If True, display controls to shut down the Jupyter server, such as menu items
# or buttons.
# Default: True
c.ServerApp.quit_button = True
## (sec) Time window used to
# check the message and data rate limits.
# Default: 3
# c.ServerApp.rate_limit_window = 3
## Reraise exceptions encountered loading server extensions?
# Default: False
# c.ServerApp.reraise_server_extension_failures = False
## The directory to use for notebooks and kernels.
# Default: ''
import os
#os.environ['FIDLE_MASTER_VERSION'] = '2.4.1'
#fidle_master_version = os.environ.get('FIDLE_MASTER_VERSION')
c.ServerApp.root_dir = '/notebooks/last'
## The session manager class to use.
# Default: 'jupyter_server.services.sessions.sessionmanager.SessionManager'
# c.ServerApp.session_manager_class = 'jupyter_server.services.sessions.sessionmanager.SessionManager'
## Instead of starting the Application, dump configuration to stdout
# See also: Application.show_config
# c.ServerApp.show_config = False
## Instead of starting the Application, dump configuration to stdout (as JSON)
# See also: Application.show_config_json
# c.ServerApp.show_config_json = False
## Shut down the server after N seconds with no kernels or terminals running and
# no activity. This can be used together with culling idle kernels
# (MappingKernelManager.cull_idle_timeout) to shutdown the Jupyter server when
# it's not in use. This is not precisely timed: it may shut down up to a minute
# later. 0 (the default) disables this automatic shutdown.
# Default: 0
# c.ServerApp.shutdown_no_activity_timeout = 0
## The UNIX socket the Jupyter server will listen on.
# Default: ''
# c.ServerApp.sock = ''
## The permissions mode for UNIX socket creation (default: 0600).
# Default: '0600'
# c.ServerApp.sock_mode = '0600'
## Supply SSL options for the tornado HTTPServer.
# See the tornado docs for details.
# Default: {}
# c.ServerApp.ssl_options = {}
## Supply overrides for terminado. Currently only supports "shell_command".
# Default: {}
# c.ServerApp.terminado_settings = {}
## Set to False to disable terminals.
#
# This does *not* make the server more secure by itself.
# Anything the user can do in a terminal, they can also do in a notebook.
#
# Terminals may also be automatically disabled if the terminado package
# is not available.
# Default: True
# c.ServerApp.terminals_enabled = True
## Token used for authenticating first-time connections to the server.
#
# The token can be read from the file referenced by JUPYTER_TOKEN_FILE or set directly
# with the JUPYTER_TOKEN environment variable.
#
# When no password is enabled,
# the default is to generate a new, random token.
#
# Setting to an empty string disables authentication altogether, which
# is NOT RECOMMENDED.
# Default: '<generated>'
# c.ServerApp.token = '<generated>'
## Supply overrides for the tornado.web.Application that the Jupyter server uses.
# Default: {}
# c.ServerApp.tornado_settings = {}
## Whether to trust or not X-Scheme/X-Forwarded-Proto and X-Real-Ip/X-Forwarded-
# For headers sent by the upstream reverse proxy. Necessary if the proxy handles
# SSL
# Default: False
# c.ServerApp.trust_xheaders = False
## Disable launching browser by redirect file
# For versions of notebook > 5.7.2, a security feature measure was added that
# prevented the authentication token used to launch the browser from being visible.
# This feature makes it difficult for other users on a multi-user system to
# run code in your Jupyter session as you.
# However, in some environments (like Windows Subsystem for Linux (WSL) and Chromebooks),
# launching a browser using a redirect file can lead to the browser failing to load.
# This is because of the difference in file structures/paths between the runtime and
# the browser.
#
# Setting this to False will disable this behavior, allowing the browser
# to launch by using a URL and visible token (as before).
# Default: True
# c.ServerApp.use_redirect_file = True
## Specify where to open the server on startup. This is the
# `new` argument passed to the standard library method `webbrowser.open`.
# The behaviour is not guaranteed, but depends on browser support. Valid
# values are:
#
# - 2 opens a new tab,
# - 1 opens a new window,
# - 0 opens in an existing window.
#
# See the `webbrowser.open` documentation for details.
# Default: 2
# c.ServerApp.webbrowser_open_new = 2
## Set the tornado compression options for websocket connections.
#
# This value will be returned from
# :meth:`WebSocketHandler.get_compression_options`. None (default) will disable
# compression. A dict (even an empty one) will enable compression.
#
# See the tornado docs for WebSocketHandler.get_compression_options for details.
# Default: None
# c.ServerApp.websocket_compression_options = None
## The base URL for websockets,
# if it differs from the HTTP server (hint: it almost certainly doesn't).
#
# Should be in the form of an HTTP origin: ws[s]://hostname[:port]
# Default: ''
# c.ServerApp.websocket_url = ''