%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/header.svg"></img>
# <!-- TITLE --> [K3VAE2] - VAE, using a custom model class (MNIST dataset)
<!-- DESC --> Construction and training of a VAE, using model subclass, with a latent space of small dimension.
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->
## Objectives :
- Understanding and implementing a **variational autoencoder** neural network (VAE)
- Understanding a still more **advanced programming model**, using a **custom model**
Since the computation is demanding, it is preferable to start with a very simple dataset such as MNIST.
...use MNIST at a small scale if you don't have a GPU ;-)
## What we're going to do :
- Defining a VAE model
- Build the model
- Train it
- Have a look at the training process
## Acknowledgements :
Thanks to **François Chollet**, the creator of Keras, on whose example this notebook is based!
See : https://keras.io/examples/generative/vae
%% Cell type:markdown id: tags:
## Step 1 - Init python stuff
%% Cell type:code id: tags:
``` python
import os
os.environ['KERAS_BACKEND'] = 'torch'
import keras
from keras import layers
import numpy as np
from modules.models import VAE
from modules.layers import SamplingLayer
from modules.callbacks import ImagesCallback
from modules.datagen import MNIST
import matplotlib.pyplot as plt
import scipy.stats
import sys
import fidle
# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('K3VAE2')
VAE.about()
```
%% Cell type:markdown id: tags:
## Step 2 - Parameters
`scale` : with scale=1, we need about 1'30 on a V100 GPU ...and more than 20' on a CPU!
`latent_dim` : 2 dimensions is small, but useful for plotting!
`fit_verbosity` : verbosity of the training progress bar: 0=silent, 1=progress bar, 2=one line per epoch
`loss_weights` : our **loss function** is the weighted sum of two losses:
- `r_loss`, which measures the reconstruction loss.
- `kl_loss`, which measures the dispersion of the latent space.
The weights are defined by `loss_weights=[k1,k2]`, where: `total_loss = k1*r_loss + k2*kl_loss`
In practice, a value of \[1,.06\] gives good results here.
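As a minimal sketch with illustrative values (not actual training losses), the weighting works like this:

``` python
# Hypothetical loss values, for illustration only (not real training output)
r_loss  = 120.0     # reconstruction loss (e.g. summed binary cross-entropy)
kl_loss = 15.0      # KL divergence between q(z|x) and N(0, I)

k1, k2 = 1, .06     # loss_weights used in this notebook
total_loss = k1 * r_loss + k2 * kl_loss
print(total_loss)
```

A larger `k2` pushes the latent distribution closer to the prior, at the cost of reconstruction quality.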
%% Cell type:code id: tags:
``` python
latent_dim = 6
loss_weights = [1,.06]
scale = .2
seed = 123
batch_size = 64
epochs = 4
fit_verbosity = 1
```
%% Cell type:markdown id: tags:
Override parameters (batch mode) - Just forget this cell
%% Cell type:code id: tags:
``` python
fidle.override('latent_dim', 'loss_weights', 'scale', 'seed', 'batch_size', 'epochs', 'fit_verbosity')
```
%% Cell type:markdown id: tags:
## Step 3 - Prepare data
`MNIST.get_data()` returns : `x_train,y_train, x_test,y_test`, \
but we only need x_train for our training.
%% Cell type:code id: tags:
``` python
x_data, y_data, _,_ = MNIST.get_data(seed=seed, scale=scale, train_prop=1 )
fidle.scrawler.images(x_data[:20], None, indices='all', columns=10, x_size=1,y_size=1,y_padding=0, save_as='01-original')
```
%% Cell type:markdown id: tags:
## Step 4 - Build model
In this example, we will use a **custom model**.
For this, we will use :
- `SamplingLayer`, which generates a vector z from the parameters z_mean and z_log_var - See : [SamplingLayer.py](./modules/layers/SamplingLayer.py)
- `VAE`, a custom model with a specific train_step - See : [VAE.py](./modules/models/VAE.py)
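The `SamplingLayer` relies on the reparameterization trick: rather than sampling z directly from N(z_mean, exp(z_log_var)), it samples epsilon from N(0, 1) and shifts/scales it, which keeps z differentiable with respect to the encoder outputs. A NumPy sketch of the idea (illustrative only; the actual layer works on Torch tensors):

``` python
import numpy as np

# Illustrative shapes and values only
rng = np.random.default_rng(0)
batch_size, latent_dim = 4, 2

z_mean    = np.zeros((batch_size, latent_dim))   # encoder output (mean)
z_log_var = np.zeros((batch_size, latent_dim))   # encoder output (log variance)

# Reparameterization trick: sample epsilon, then shift/scale it,
# so z stays differentiable w.r.t. z_mean and z_log_var
epsilon = rng.standard_normal((batch_size, latent_dim))
z = z_mean + np.exp(0.5 * z_log_var) * epsilon

print(z.shape)
```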
%% Cell type:markdown id: tags:
#### Encoder
%% Cell type:code id: tags:
``` python
inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, strides=1, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, strides=1, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
x = layers.Dense(16, activation="relu")(x)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = SamplingLayer()([z_mean, z_log_var])
encoder = keras.Model(inputs, [z_mean, z_log_var, z], name="encoder")
encoder.compile()
```
%% Cell type:markdown id: tags:
#### Decoder
%% Cell type:code id: tags:
``` python
inputs = keras.Input(shape=(latent_dim,))
x = layers.Dense(7 * 7 * 64, activation="relu")(inputs)
x = layers.Reshape((7, 7, 64))(x)
x = layers.Conv2DTranspose(64, 3, strides=1, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
outputs = layers.Conv2DTranspose(1, 3, padding="same", activation="sigmoid")(x)
decoder = keras.Model(inputs, outputs, name="decoder")
decoder.compile()
```
%% Cell type:markdown id: tags:
#### VAE
`VAE` is a custom model with a specific train_step - See : [VAE.py](./modules/models/VAE.py)
%% Cell type:code id: tags:
``` python
vae = VAE(encoder, decoder, loss_weights)
vae.compile(optimizer='adam')
```
%% Cell type:markdown id: tags:
## Step 5 - Train
### 5.1 - Using two nice custom callbacks :-)
Two custom callbacks are used:
- `ImagesCallback` : saves images during training - See [ImagesCallback.py](./modules/callbacks/ImagesCallback.py)
- `BestModelCallback` : saves the best model - See [BestModelCallback.py](./modules/callbacks/BestModelCallback.py)
%% Cell type:code id: tags:
``` python
callback_images = ImagesCallback(x=x_data, z_dim=latent_dim, nb_images=5, from_z=True, from_random=True, run_dir=run_dir)
callbacks_list = [callback_images]
```
%% Cell type:markdown id: tags:
### 5.2 - Let's train !
With `scale=1`, this takes about 1'15 on a GPU (V100 at IDRIS) ...or 20' on a CPU
%% Cell type:code id: tags:
``` python
chrono=fidle.Chrono()
chrono.start()
history = vae.fit(x_data, epochs=epochs, batch_size=batch_size, callbacks=callbacks_list, verbose=fit_verbosity)
chrono.show()
```
%% Cell type:markdown id: tags:
## Step 6 - Training review
### 6.1 - History
%% Cell type:code id: tags:
``` python
fidle.scrawler.history(history, plot={"Loss":['loss']}, save_as='history')
```
%% Cell type:markdown id: tags:
### 6.2 - Reconstruction during training
At the end of each epoch, our callback saved some reconstructed images.
Where :
Original image -> encoder -> z -> decoder -> Reconstructed image
%% Cell type:code id: tags:
``` python
images_z, images_r = callback_images.get_images( range(0,epochs,2) )
fidle.utils.subtitle('Original images :')
fidle.scrawler.images(x_data[:5], None, indices='all', columns=5, x_size=2,y_size=2, save_as='02-original')
fidle.utils.subtitle('Encoded/decoded images')
fidle.scrawler.images(images_z, None, indices='all', columns=5, x_size=2,y_size=2, save_as='03-reconstruct')
fidle.utils.subtitle('Original images :')
fidle.scrawler.images(x_data[:5], None, indices='all', columns=5, x_size=2,y_size=2, save_as=None)
```
%% Cell type:markdown id: tags:
### 6.3 - Generation (latent -> decoder) during training
%% Cell type:code id: tags:
``` python
fidle.utils.subtitle('Generated images from latent space')
fidle.scrawler.images(images_r, None, indices='all', columns=5, x_size=2,y_size=2, save_as='04-encoded')
```
%% Cell type:markdown id: tags:
### 6.4 - Save model
%% Cell type:code id: tags:
``` python
os.makedirs(f'{run_dir}/models', exist_ok=True)
vae.save(f'{run_dir}/models/vae_model.keras')
```
%% Cell type:markdown id: tags:
## Step 7 - Model evaluation
%% Cell type:markdown id: tags:
### 7.1 - Reload model
%% Cell type:code id: tags:
``` python
vae=VAE()
vae.reload(f'{run_dir}/models/vae_model.keras')
```
%% Cell type:markdown id: tags:
### 7.2 - Image reconstruction
%% Cell type:code id: tags:
``` python
# ---- Select few images
x_show = fidle.utils.pick_dataset(x_data, n=10)
# ---- Get latent points and reconstructed images
z_mean, z_var, z = vae.encoder.predict(x_show)
x_reconst = vae.decoder.predict(z)
# ---- Show it
labels=[ str(np.round(z[i],1)) for i in range(10) ]
fidle.scrawler.images(x_show, None, indices='all', columns=10, x_size=2,y_size=2, save_as='05-original')
fidle.scrawler.images(x_reconst, None, indices='all', columns=10, x_size=2,y_size=2, save_as='06-reconstruct')
```
%% Cell type:markdown id: tags:
### 7.3 - Visualization of the latent space
%% Cell type:code id: tags:
``` python
n_show = int(20000*scale)
# ---- Select images
x_show, y_show = fidle.utils.pick_dataset(x_data,y_data, n=n_show)
# ---- Get latent points
z_mean, z_var, z = vae.encoder.predict(x_show)
# ---- Show them
fig = plt.figure(figsize=(14, 10))
plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=30)
plt.colorbar()
fidle.scrawler.save_fig('07-Latent-space')
plt.show()
```
%% Cell type:markdown id: tags:
### 7.4 - Generative latent space
%% Cell type:code id: tags:
``` python
if latent_dim>2:
print('Sorry, This part can only work if the latent space is of dimension 2')
else:
grid_size = 18
grid_scale = 1
# ---- Draw a ppf grid
grid=[]
for y in scipy.stats.norm.ppf(np.linspace(0.99, 0.01, grid_size),scale=grid_scale):
for x in scipy.stats.norm.ppf(np.linspace(0.01, 0.99, grid_size),scale=grid_scale):
grid.append( (x,y) )
grid=np.array(grid)
# ---- Draw latent points and grid
fig = plt.figure(figsize=(10, 8))
plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=20)
plt.scatter(grid[:, 0] , grid[:, 1], c = 'black', s=60, linewidth=2, marker='+', alpha=1)
fidle.scrawler.save_fig('08-Latent-grid')
plt.show()
# ---- Plot grid corresponding images
x_reconst = vae.decoder.predict([grid])
fidle.scrawler.images(x_reconst, indices='all', columns=grid_size, x_size=0.5,y_size=0.5, y_padding=0,spines_alpha=0.1, save_as='09-Latent-morphing')
```
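The grid above uses `scipy.stats.norm.ppf`, the inverse CDF of the normal distribution: uniformly spaced probabilities are mapped onto Gaussian quantiles, so grid points are denser where the prior N(0, 1) puts more mass. A quick standalone check of that mapping:

``` python
import numpy as np
import scipy.stats

# Uniformly spaced probabilities -> Gaussian quantiles (inverse CDF)
probs = np.linspace(0.01, 0.99, 5)
xs = scipy.stats.norm.ppf(probs, scale=1)

# The median probability maps to 0, and the grid is symmetric around it
print(np.round(xs, 3))
```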
%% Cell type:code id: tags:
``` python
fidle.end()
```
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/logo-paysage.svg"></img>
%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/header.svg"></img>
# <!-- TITLE --> [K3VAE3] - Analysis of the VAE's latent space of MNIST dataset
<!-- DESC --> Visualization and analysis of the VAE's latent space of the dataset MNIST
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->
## Objectives :
- First data generation from **latent space**
- Understanding of underlying principles
- Model management
Here, we no longer consume data, we generate it! ;-)
## What we're going to do :
- Load a saved model
- Reconstruct some images
- Latent space visualization
- Matrix of generated images
%% Cell type:markdown id: tags:
## Step 1 - Init python stuff
%% Cell type:markdown id: tags:
### 1.1 - Init python
%% Cell type:code id: tags:
``` python
import os
os.environ['KERAS_BACKEND'] = 'torch'
import keras
from keras import layers
import numpy as np
from modules.models import VAE
from modules.datagen import MNIST
import matplotlib
import matplotlib.pyplot as plt
from barviz import Simplex
from barviz import Collection
import scipy.stats
import sys
import fidle
# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('K3VAE3')
```
%% Cell type:markdown id: tags:
### 1.2 - Parameters
%% Cell type:code id: tags:
``` python
scale = 1
seed = 123
models_dir = './run/K3VAE2'
```
%% Cell type:markdown id: tags:
Override parameters (batch mode) - Just forget this cell
%% Cell type:code id: tags:
``` python
fidle.override('scale', 'seed', 'models_dir')
```
%% Cell type:markdown id: tags:
## Step 2 - Get data
%% Cell type:code id: tags:
``` python
x_data, y_data, _,_ = MNIST.get_data(seed=seed, scale=scale, train_prop=1 )
```
%% Cell type:markdown id: tags:
## Step 3 - Reload best model
%% Cell type:code id: tags:
``` python
vae=VAE()
vae.reload(f'{models_dir}/models/vae_model')
```
%% Cell type:markdown id: tags:
## Step 4 - Image reconstruction
%% Cell type:code id: tags:
``` python
# ---- Select few images
x_show = fidle.utils.pick_dataset(x_data, n=10)
# ---- Get latent points and reconstructed images
z_mean, z_var, z = vae.encoder.predict(x_show, verbose=0)
x_reconst = vae.decoder.predict(z, verbose=0)
latent_dim = z.shape[1]
# ---- Show it
labels=[ str(np.round(z[i],1)) for i in range(10) ]
fidle.utils.subtitle('Originals :')
fidle.scrawler.images(x_show, None, indices='all', columns=10, x_size=2,y_size=2, save_as='01-original')
fidle.utils.subtitle('Reconstructed :')
fidle.scrawler.images(x_reconst, None, indices='all', columns=10, x_size=2,y_size=2, save_as='02-reconstruct')
```
%% Cell type:markdown id: tags:
## Step 5 - Visualizing the latent space
%% Cell type:code id: tags:
``` python
n_show = min( 20000, len(x_data) )
# ---- Select images
x_show, y_show = fidle.utils.pick_dataset(x_data,y_data, n=n_show)
# ---- Get latent points
z_mean, z_var, z = vae.encoder.predict(x_show, verbose=0)
```
%% Cell type:markdown id: tags:
### 5.1 - Classic 2d visualisation
%% Cell type:code id: tags:
``` python
fig = plt.figure(figsize=(14, 10))
plt.scatter(z[:, 2] , z[:, 4], c=y_show, cmap= 'tab10', alpha=0.5, s=30)
plt.colorbar()
fidle.scrawler.save_fig('03-Latent-space')
plt.show()
```
%% Cell type:markdown id: tags:
### 5.2 - Simplex visualisation
%% Cell type:code id: tags:
``` python
if latent_dim<4:
print('Sorry, This part can only work if the latent space is greater than 3')
else:
# ---- Softmax rescale
#
zs = np.exp(z)/np.sum(np.exp(z),axis=1,keepdims=True)
# zc = zs * 1/np.max(zs)
# ---- Create collection
#
c = Collection(zs, colors=y_show, labels=y_show)
c.attrs.markers_colormap = {'colorscale':'Rainbow','cmin':0,'cmax':latent_dim}
c.attrs.markers_size = 5
c.attrs.markers_border_width = 0
c.attrs.markers_opacity = 0.8
s = Simplex.build(latent_dim)
s.attrs.width = 1000
s.attrs.height = 1000
s.plot(c)
```
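The softmax rescale above maps each latent vector onto the probability simplex (non-negative components summing to 1), which is the kind of input a simplex plot expects. A standalone check of that property (independent of barviz):

``` python
import numpy as np

# Illustrative latent vectors (any real values)
z = np.array([[ 1.0, -0.5,  2.0,  0.3],
              [-1.2,  0.4,  0.0,  0.9]])

# Row-wise softmax: non-negative components summing to 1
zs = np.exp(z) / np.sum(np.exp(z), axis=1, keepdims=True)

print(zs.sum(axis=1))
```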
%% Cell type:markdown id: tags:
## Step 6 - Generate from latent space (latent_dim==2)
%% Cell type:code id: tags:
``` python
if latent_dim>2:
print('Sorry, This part can only work if the latent space is of dimension 2')
else:
grid_size = 14
grid_scale = 1.
# ---- Draw a ppf grid
grid=[]
for y in scipy.stats.norm.ppf(np.linspace(0.99, 0.01, grid_size),scale=grid_scale):
for x in scipy.stats.norm.ppf(np.linspace(0.01, 0.99, grid_size),scale=grid_scale):
grid.append( (x,y) )
grid=np.array(grid)
# ---- Draw latent points and grid
fig = plt.figure(figsize=(12, 10))
plt.scatter(z[:, 0] , z[:, 1], c=y_show, cmap= 'tab10', alpha=0.5, s=20)
plt.scatter(grid[:, 0] , grid[:, 1], c = 'black', s=60, linewidth=2, marker='+', alpha=1)
fidle.scrawler.save_fig('04-Latent-grid')
plt.show()
# ---- Plot grid corresponding images
x_reconst = vae.decoder.predict([grid])
fidle.scrawler.images(x_reconst, indices='all', columns=grid_size, x_size=0.5,y_size=0.5, y_padding=0,spines_alpha=0.1, save_as='05-Latent-morphing')
```
%% Cell type:code id: tags:
``` python
fidle.end()
```
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/logo-paysage.svg"></img>
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___| ImageCallback
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/SARI/DEVLOG 2020 - S. Arias, E. Maldonado, JL. Parouty
# ------------------------------------------------------------------
# 2.0 version by JL Parouty, feb 2021
from keras.callbacks import Callback
import numpy as np
import matplotlib.pyplot as plt
from skimage import io
import os
class ImagesCallback(Callback):
    '''
    Save generated (random mode) or encoded/decoded (z mode) images on epoch end.
    params:
        x           : input images, for z mode (None)
        z_dim       : size of the latent space, for random mode (None)
        nb_images   : number of images to save
        from_z      : save images from z (False)
        from_random : save images from random (False)
        filename    : images filename
        run_dir     : output directory to save images
    '''

    def __init__(self, x=None,
                 z_dim=None,
                 nb_images=5,
                 from_z=False,
                 from_random=False,
                 filename='image-{epoch:03d}-{i:02d}.jpg',
                 run_dir='./run'):

        # ---- Parameters
        #
        self.x = None if x is None else x[:nb_images]
        self.z_dim = z_dim
        self.nb_images = nb_images
        self.from_z = from_z
        self.from_random = from_random
        self.filename_z = run_dir + '/images-z/' + filename
        self.filename_random = run_dir + '/images-random/' + filename
        if from_z:      os.makedirs(run_dir + '/images-z/',      mode=0o750, exist_ok=True)
        if from_random: os.makedirs(run_dir + '/images-random/', mode=0o750, exist_ok=True)

    def save_images(self, images, filename, epoch):
        '''Save images as <filename>'''
        for i, image in enumerate(images):
            image = image.squeeze()  # Squeeze it if monochrome : (lx,ly,1) -> (lx,ly)
            filenamei = filename.format(epoch=epoch, i=i)
            if len(image.shape) == 2:
                plt.imsave(filenamei, image, cmap='gray_r')
            else:
                plt.imsave(filenamei, image)

    def on_epoch_end(self, epoch, logs={}):
        '''Called at the end of each epoch'''
        encoder = self.model.get_layer('encoder')
        decoder = self.model.get_layer('decoder')

        if self.from_random:
            z = np.random.normal(size=(self.nb_images, self.z_dim))
            images = decoder.predict(z)
            self.save_images(images, self.filename_random, epoch)

        if self.from_z:
            z_mean, z_var, z = encoder.predict(self.x)
            images = decoder.predict(z)
            self.save_images(images, self.filename_z, epoch)

    def get_images(self, epochs=None, from_z=True, from_random=True):
        '''Read and return saved images. epochs is a range'''
        if epochs is None:
            return
        images_z = []
        images_r = []
        for epoch in list(epochs):
            for i in range(self.nb_images):
                if from_z:
                    f = self.filename_z.format(epoch=epoch, i=i)
                    images_z.append(io.imread(f))
                if from_random:
                    f = self.filename_random.format(epoch=epoch, i=i)
                    images_r.append(io.imread(f))
        return images_z, images_r
from modules.callbacks.ImagesCallback import ImagesCallback
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___| MNIST Data loader
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/MIAI - https://fidle.cnrs.fr
# ------------------------------------------------------------------
# JL Parouty (Mars 2024)
import h5py
import os
import numpy as np
from hashlib import blake2b
import keras
import keras.datasets.mnist as mnist
# ------------------------------------------------------------------
# A useful class to manage our MNIST dataset
# This class allows managing datasets derived from the original MNIST
# ------------------------------------------------------------------

class MNIST():

    version = '0.1'

    def __init__(self):
        pass

    @classmethod
    def get_data(cls, normalize=True, expand=True, scale=1., train_prop=0.8, shuffle=True, seed=None):
        """
        Return original MNIST dataset
        args:
            normalize  : Normalize dataset or not (True)
            expand     : Reshape images as (28,28,1) instead of (28,28) (True)
            scale      : Scale of dataset to use. 1. means 100% (1.)
            train_prop : Ratio of train/test (0.8)
            shuffle    : Shuffle data if True (True)
            seed       : Random seed value. False means no seed, None means using /dev/urandom (None)
        returns:
            x_train, y_train, x_test, y_test
        """

        # ---- Seed
        #
        if seed is not False:
            np.random.seed(seed)
            print(f'Seeded ({seed})')

        # ---- Get data
        #
        (x_train, y_train), (x_test, y_test) = mnist.load_data()
        print('Dataset loaded.')

        # ---- Concatenate
        #
        x_data = np.concatenate([x_train, x_test], axis=0)
        y_data = np.concatenate([y_train, y_test])
        print('Concatenated.')

        # ---- Shuffle
        #
        if shuffle:
            p = np.random.permutation(len(x_data))
            x_data, y_data = x_data[p], y_data[p]
            print('Shuffled.')

        # ---- Rescale
        #
        n = int(scale * len(x_data))
        x_data, y_data = x_data[:n], y_data[:n]
        print(f'Rescaled ({scale}).')

        # ---- Normalization
        #
        if normalize:
            x_data = x_data.astype('float32') / 255.
            print('Normalized.')

        # ---- Reshape : (28,28) -> (28,28,1)
        #
        if expand:
            x_data = np.expand_dims(x_data, axis=-1)
            print('Reshaped.')

        # ---- Split
        #
        n = int(len(x_data) * train_prop)
        x_train, x_test = x_data[:n], x_data[n:]
        y_train, y_test = y_data[:n], y_data[n:]
        print(f'Split ({train_prop}).')

        # ---- Hash
        #
        h = blake2b(digest_size=10)
        for a in [x_train, x_test, y_train, y_test]:
            h.update(a)

        # ---- About and return
        #
        print('x_train shape is  : ', x_train.shape)
        print('x_test  shape is  : ', x_test.shape)
        print('y_train shape is  : ', y_train.shape)
        print('y_test  shape is  : ', y_test.shape)
        print('Blake2b digest is : ', h.hexdigest())
        return x_train, y_train, x_test, y_test
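As a quick sanity check of the `scale` / `train_prop` arithmetic in `get_data()` (full MNIST is 70,000 images once train and test are concatenated):

``` python
n_total    = 70_000   # 60,000 train + 10,000 test images, concatenated
scale      = 0.2
train_prop = 0.8

n_kept  = int(scale * n_total)       # images actually used
n_train = int(n_kept * train_prop)   # training split
n_test  = n_kept - n_train           # remaining test split

print(n_kept, n_train, n_test)
```

So with `scale=0.2` and `train_prop=1`, as in the K3VAE2 notebook, all 14,000 retained images go to training.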
from modules.datagen.MNIST import MNIST
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___| SamplingLayer
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/MIAI - https://fidle.cnrs.fr
# ------------------------------------------------------------------
# JL Parouty (Mars 2024)
import keras
import torch
from torch.distributions.normal import Normal
# Note : https://keras.io/guides/making_new_layers_and_models_via_subclassing/
class SamplingLayer(keras.layers.Layer):
    '''A custom layer that receives (z_mean, z_log_var) and samples a z vector'''

    def call(self, inputs):
        z_mean, z_log_var = inputs
        batch_size, latent_dim = z_mean.shape
        epsilon = Normal(0, 1).sample((batch_size, latent_dim)).to(z_mean.device)
        z = z_mean + torch.exp(0.5 * z_log_var) * epsilon
        return z
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___| SamplingLayer
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/MIAI - https://fidle.cnrs.fr
# ------------------------------------------------------------------
# JL Parouty (mars 2024)
import keras
import torch
# See : https://keras.io/guides/making_new_layers_and_models_via_subclassing/
class VariationalLossLayer(keras.layers.Layer):

    def __init__(self, loss_weights=[3, 7]):
        super().__init__()
        self.k1 = loss_weights[0]
        self.k2 = loss_weights[1]

    def call(self, inputs):
        k1 = self.k1
        k2 = self.k2

        # ---- Retrieve inputs
        #
        x, z_mean, z_log_var, y = inputs

        # ---- Compute : reconstruction loss
        #
        r_loss = torch.nn.functional.binary_cross_entropy(y, x, reduction='sum')

        # ---- Compute : kl_loss
        #
        kl_loss = -torch.sum(1 + z_log_var - z_mean.pow(2) - z_log_var.exp())

        # ---- Compute total loss, and add it
        #
        loss = r_loss * k1 + kl_loss * k2
        self.add_loss(loss)
        return y

    def get_config(self):
        return {'loss_weights': [self.k1, self.k2]}
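For reference, this `kl_loss` expression is the closed-form KL divergence between the approximate posterior N(z_mean, exp(z_log_var)) and the standard normal prior, written without the usual 1/2 factor (which can be absorbed into the weight `k2`). A small NumPy check of its behaviour:

``` python
import numpy as np

def kl_term(z_mean, z_log_var):
    # Same expression as in the layer above (no 1/2 factor)
    return -np.sum(1 + z_log_var - z_mean**2 - np.exp(z_log_var))

# When the posterior equals the prior N(0, I), the KL term vanishes...
print(kl_term(np.zeros((4, 2)), np.zeros((4, 2))))
# ...and any deviation from the prior makes it positive
print(kl_term(np.ones((4, 2)), np.zeros((4, 2))))
```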
from modules.layers.SamplingLayer import SamplingLayer
from modules.layers.VariationalLossLayer import VariationalLossLayer
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___| VAE Example
# ------------------------------------------------------------------
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/MIAI - https://fidle.cnrs.fr
# ------------------------------------------------------------------
# JL Parouty (March 2024)
import numpy as np
import keras
import torch
from IPython.display import display,Markdown
from modules.layers import SamplingLayer
import os
# Note : https://keras.io/guides/making_new_layers_and_models_via_subclassing/
class VAE(keras.Model):
    '''
    A VAE model, built from given encoder and decoder
    '''

    version = '2.0'

    def __init__(self, encoder=None, decoder=None, loss_weights=[1, 1], **kwargs):
        '''
        VAE instantiation with encoder, decoder and loss weights
        args :
            encoder      : Encoder model
            decoder      : Decoder model
            loss_weights : Weights of the loss functions: reconstruction_loss and kl_loss
        return:
            None
        '''
        super(VAE, self).__init__(**kwargs)
        self.encoder = encoder
        self.decoder = decoder
        self.loss_weights = loss_weights
        print(f'Fidle VAE is ready :-) loss_weights={list(self.loss_weights)}')

    def call(self, inputs):
        '''
        Model forward pass, when we use our model
        args:
            inputs : Model inputs
        return:
            output : Output of the model
        '''
        z_mean, z_log_var, z = self.encoder(inputs)
        output = self.decoder(z)
        return output

    def train_step(self, input):
        '''
        Implementation of the training update.
        Receive an input, compute loss, get gradient, update weights and return metrics.
        Here, our metrics are loss.
        args:
            input : Model inputs
        return:
            loss    : Total loss
            r_loss  : Reconstruction loss
            kl_loss : KL loss
        '''
        # ---- Get the input we need, specified in the .fit()
        #
        if isinstance(input, tuple):
            input = input[0]

        k1, k2 = self.loss_weights

        # ---- Reset grad
        #
        self.zero_grad()

        # ---- Forward pass
        #      Get encoder outputs
        #
        z_mean, z_log_var, z = self.encoder(input)

        # ---- Get reconstruction from decoder
        #
        reconstruction = self.decoder(z)

        # ---- Compute loss
        #      Total loss = Reconstruction loss + KL loss
        #
        r_loss = torch.nn.functional.binary_cross_entropy(reconstruction, input, reduction='sum')
        kl_loss = -torch.sum(1 + z_log_var - z_mean.pow(2) - z_log_var.exp())
        loss = r_loss * k1 + kl_loss * k2

        # ---- Compute gradients for the weights
        #
        loss.backward()

        # ---- Adjust learning weights
        #
        trainable_weights = [v for v in self.trainable_weights]
        gradients = [v.value.grad for v in trainable_weights]
        with torch.no_grad():
            self.optimizer.apply(gradients, trainable_weights)

        # ---- Update metrics (includes the metric that tracks the loss)
        #
        for metric in self.metrics:
            if metric.name == "loss":
                metric.update_state(loss)
            else:
                metric.update_state(input, reconstruction)

        # ---- Return a dict mapping metric names to current value
        #      Note that it will include the loss (tracked in self.metrics).
        #
        return {m.name: m.result() for m in self.metrics}

    # Legacy TensorFlow implementation, kept for reference:
    #
    # # ---- Forward pass
    # #      Run the forward pass and record
    # #      operations on the GradientTape.
    # #
    # with tf.GradientTape() as tape:
    #     # ---- Get encoder outputs
    #     #
    #     z_mean, z_log_var, z = self.encoder(input)
    #     # ---- Get reconstruction from decoder
    #     #
    #     reconstruction = self.decoder(z)
    #     # ---- Compute loss
    #     #      Reconstruction loss, KL loss and Total loss
    #     #
    #     reconstruction_loss = k1 * tf.reduce_mean( keras.losses.binary_crossentropy(input, reconstruction) )
    #     kl_loss = 1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var)
    #     kl_loss = -tf.reduce_mean(kl_loss) * k2
    #     total_loss = reconstruction_loss + kl_loss
    # # ---- Retrieve gradients from gradient_tape
    # #      and run one step of gradient descent
    # #      to optimize trainable weights
    # #
    # grads = tape.gradient(total_loss, self.trainable_weights)
    # self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
    # return {
    #     "loss": total_loss,
    #     "r_loss": reconstruction_loss,
    #     "kl_loss": kl_loss,
    # }

    def predict(self, inputs):
        '''Our predict function...'''
        z_mean, z_var, z = self.encoder.predict(inputs)
        outputs = self.decoder.predict(z)
        return outputs

    def save(self, filename):
        '''Save model in 2 parts'''
        filename, extension = os.path.splitext(filename)
        self.encoder.save(f'{filename}-encoder.keras')
        self.decoder.save(f'{filename}-decoder.keras')

    def reload(self, filename):
        '''Reload a 2-part saved model.'''
        filename, extension = os.path.splitext(filename)
        self.encoder = keras.models.load_model(f'{filename}-encoder.keras', custom_objects={'SamplingLayer': SamplingLayer})
        self.decoder = keras.models.load_model(f'{filename}-decoder.keras')
        print('Reloaded.')

    @classmethod
    def about(cls):
        '''Basic whoami method'''
        display(Markdown('<br>**FIDLE 2024 - VAE**'))
        print('Version       :', cls.version)
        print('Keras version :', keras.__version__)
from modules.models.VAE import VAE
%% Cell type:markdown id: tags:
Text Embedding - IMDB dataset
=============================
---
Formation Introduction au Deep Learning (FIDLE) - S. Arias, E. Maldonado, JL. Parouty - CNRS/SARI/DEVLOG - 2020
## Variational AutoEncoder (VAE), with MNIST Dataset
%% Cell type:markdown id: tags:
## Step 1 - Init python stuff
%% Cell type:code id: tags:
``` python
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.datasets.imdb as imdb
import models.VAE
from models.VAE import VariationalAutoencoder
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
import os,sys,h5py,json
from importlib import reload
sys.path.append('..')
import fidle.pwk as ooo
ooo.init()
```
%% Output
FIDLE 2020 - Practical Work Module
Version : 0.2.4
Run time : Sunday 2 February 2020, 19:30:36
Matplotlib style : ../fidle/talk.mplstyle
TensorFlow version : 2.0.0
Keras version : 2.2.4-tf
%% Cell type:code id: tags:
``` python
reload(models.VAE)
input_shape = (28,28,1)
z_dim = 2
encoder= [ {'type':'Conv2D', 'filters':32, 'kernel_size':(3,3), 'strides':1, 'padding':'same', 'activation':'relu'},
{'type':'Conv2D', 'filters':64, 'kernel_size':(3,3), 'strides':2, 'padding':'same', 'activation':'relu'},
{'type':'Conv2D', 'filters':64, 'kernel_size':(3,3), 'strides':2, 'padding':'same', 'activation':'relu'},
{'type':'Conv2D', 'filters':64, 'kernel_size':(3,3), 'strides':1, 'padding':'same', 'activation':'relu'}
]
decoder= [ {'type':'Conv2DT', 'filters':64, 'kernel_size':(3,3), 'strides':1, 'padding':'same', 'activation':'relu'},
{'type':'Conv2DT', 'filters':64, 'kernel_size':(3,3), 'strides':2, 'padding':'same', 'activation':'relu'},
{'type':'Conv2DT', 'filters':32, 'kernel_size':(3,3), 'strides':2, 'padding':'same', 'activation':'relu'},
{'type':'Conv2DT', 'filters':1, 'kernel_size':(3,3), 'strides':1, 'padding':'same', 'activation':'sigmoid'}
]
vae = models.VAE.VariationalAutoencoder(input_shape, encoder, decoder, z_dim)
```
%% Cell type:code id: tags:
``` python
vae.encoder.summary()
```
%% Output
Model: "model_29"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
encoder_input (InputLayer) [(None, 28, 28, 1)] 0
__________________________________________________________________________________________________
Layer_1 (Conv2D) (None, 28, 28, 32) 320 encoder_input[0][0]
__________________________________________________________________________________________________
Layer_2 (Conv2D) (None, 14, 14, 64) 18496 Layer_1[0][0]
__________________________________________________________________________________________________
Layer_3 (Conv2D) (None, 7, 7, 64) 36928 Layer_2[0][0]
__________________________________________________________________________________________________
Layer_4 (Conv2D) (None, 7, 7, 64) 36928 Layer_3[0][0]
__________________________________________________________________________________________________
flatten_7 (Flatten) (None, 3136) 0 Layer_4[0][0]
__________________________________________________________________________________________________
mu (Dense) (None, 2) 6274 flatten_7[0][0]
__________________________________________________________________________________________________
log_var (Dense) (None, 2) 6274 flatten_7[0][0]
__________________________________________________________________________________________________
encoder_output (Lambda) (None, 2) 0 mu[0][0]
log_var[0][0]
==================================================================================================
Total params: 105,220
Trainable params: 105,220
Non-trainable params: 0
__________________________________________________________________________________________________
%% Cell type:code id: tags:
``` python
vae.decoder.summary()
```
%% Output
Model: "model_30"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
decoder_input (InputLayer) [(None, 2)] 0
_________________________________________________________________
dense_7 (Dense) (None, 3136) 9408
_________________________________________________________________
reshape_7 (Reshape) (None, 7, 7, 64) 0
_________________________________________________________________
Layer_1 (Conv2DTranspose) (None, 7, 7, 64) 36928
_________________________________________________________________
Layer_2 (Conv2DTranspose) (None, 14, 14, 64) 36928
_________________________________________________________________
Layer_3 (Conv2DTranspose) (None, 28, 28, 32) 18464
_________________________________________________________________
Layer_4 (Conv2DTranspose) (None, 28, 28, 1) 289
=================================================================
Total params: 102,017
Trainable params: 102,017
Non-trainable params: 0
_________________________________________________________________
%% Cell type:code id: tags:
``` python
```
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense, Conv2DTranspose, Reshape, Lambda, Activation, BatchNormalization, LeakyReLU, Dropout
from tensorflow.keras.models import Model
import tensorflow.keras.datasets.imdb as imdb
class VariationalAutoencoder():
def __init__(self, input_shape, encoder_layers, decoder_layers, z_dim):
self.name = 'Variational AutoEncoder'
self.input_shape = input_shape
self.encoder_layers = encoder_layers
self.decoder_layers = decoder_layers
self.z_dim = z_dim
# ==== Encoder ================================================================
# ---- Input layer
encoder_input = Input(shape=self.input_shape, name='encoder_input')
x = encoder_input
# ---- Add next layers
i=1
for params in encoder_layers:
t=params['type']
params.pop('type')
if t=='Conv2D':
layer = Conv2D(**params, name=f"Layer_{i}")
if t=='Dropout':
layer = Dropout(**params)
x = layer(x)
i+=1
# ---- Flatten
shape_before_flattening = K.int_shape(x)[1:]
x = Flatten()(x)
# ---- mu / log_var
self.mu = Dense(self.z_dim, name='mu')(x)
self.log_var = Dense(self.z_dim, name='log_var')(x)
self.encoder_mu_log_var = Model(encoder_input, (self.mu, self.log_var))
# ---- output layer
def sampling(args):
mu, log_var = args
epsilon = K.random_normal(shape=K.shape(mu), mean=0., stddev=1.)
return mu + K.exp(log_var / 2) * epsilon
encoder_output = Lambda(sampling, name='encoder_output')([self.mu, self.log_var])
self.encoder = Model(encoder_input, encoder_output)
# ==== Decoder ================================================================
# ---- Input layer
decoder_input = Input(shape=(self.z_dim,), name='decoder_input')
# ---- First dense layer
x = Dense(np.prod(shape_before_flattening))(decoder_input)
x = Reshape(shape_before_flattening)(x)
# ---- Add next layers
i=1
for params in decoder_layers:
t=params['type']
params.pop('type')
if t=='Conv2DT':
layer = Conv2DTranspose(**params, name=f"Layer_{i}")
if t=='Dropout':
layer = Dropout(**params)
x = layer(x)
i+=1
decoder_output = x
self.decoder = Model(decoder_input, decoder_output)
# ==== Encoder-Decoder ========================================================
model_input = encoder_input
model_output = self.decoder(encoder_output)
self.model = Model(model_input, model_output)
def compile(self, learning_rate, r_loss_factor):
self.learning_rate = learning_rate
self.r_loss_factor = r_loss_factor
def vae_r_loss(y_true, y_pred):
r_loss = K.mean(K.square(y_true - y_pred), axis = [1,2,3])
return r_loss_factor * r_loss
def vae_kl_loss(y_true, y_pred):
kl_loss = -0.5 * K.sum(1 + self.log_var - K.square(self.mu) - K.exp(self.log_var), axis = 1)
return kl_loss
def vae_loss(y_true, y_pred):
r_loss = vae_r_loss(y_true, y_pred)
kl_loss = vae_kl_loss(y_true, y_pred)
return r_loss + kl_loss
optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
self.model.compile(optimizer=optimizer, loss = vae_loss, metrics = [vae_r_loss, vae_kl_loss], experimental_run_tf_function=False)
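The KL term in `vae_kl_loss` above is the closed-form KL divergence between the encoder's diagonal Gaussian and the standard normal prior. As a standalone sanity check (a numpy sketch, not part of the notebook code), the same formula can be evaluated by hand:

```python
import numpy as np

# Closed-form KL divergence between N(mu, exp(log_var)) and N(0, 1),
# term-by-term the same formula as vae_kl_loss above:
#   KL = -0.5 * sum(1 + log_var - mu^2 - exp(log_var))
def kl_gauss(mu, log_var):
    return -0.5 * np.sum(1.0 + log_var - np.square(mu) - np.exp(log_var), axis=1)

# When mu=0 and log_var=0 (i.e. sigma=1), the latent already matches the prior
print(kl_gauss(np.zeros((1, 2)), np.zeros((1, 2))))
# Moving the mean away from the prior increases the penalty
print(kl_gauss(np.ones((1, 2)), np.zeros((1, 2))))
```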
%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/header.svg"></img>
# <!-- TITLE --> [K3WINE1] - Wine quality prediction with a Dense Network (DNN)
<!-- DESC --> Another example of regression, with a wine quality prediction, using Keras 3 and PyTorch
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->
## Objectives :
- Predict the **quality of wines**, based on their analysis
- Understanding the principle and the architecture of a regression with a dense neural network with backup and restore of the trained model.
The **[Wine Quality datasets](https://archive.ics.uci.edu/ml/datasets/wine+Quality)** are made up of analyses of a large number of wines, with an associated quality (between 0 and 10)
This dataset is provided by:
Paulo Cortez, University of Minho, Guimarães, Portugal, http://www3.dsi.uminho.pt/pcortez
A. Cerdeira, F. Almeida, T. Matos and J. Reis, Viticulture Commission of the Vinho Verde Region(CVRVV), Porto, Portugal, @2009
This dataset can be retrieved at [University of California Irvine (UCI)](https://archive.ics.uci.edu/dataset/186/wine+quality)
Due to privacy and logistic issues, only physicochemical and sensory variables are available
There is no data about grape types, wine brand, wine selling price, etc.
- fixed acidity
- volatile acidity
- citric acid
- residual sugar
- chlorides
- free sulfur dioxide
- total sulfur dioxide
- density
- pH
- sulphates
- alcohol
- quality (score between 0 and 10)
## What we're going to do :
- (Retrieve data)
- (Preparing the data)
- (Build a model)
- Train and save the model
- Restore saved model
- Evaluate the model
- Make some predictions
%% Cell type:markdown id: tags:
## Step 1 - Import and init
%% Cell type:code id: tags:
``` python
import os
os.environ['KERAS_BACKEND'] = 'torch'
import keras
import numpy as np
import pandas as pd
import fidle
# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('K3WINE1')
```
%% Cell type:markdown id: tags:
Verbosity during training :
- 0 = silent
- 1 = progress bar
- 2 = one line per epoch
%% Cell type:code id: tags:
``` python
fit_verbosity = 1
dataset_name = 'winequality-red.csv'
```
%% Cell type:markdown id: tags:
Override parameters (batch mode) - Just forget this cell
%% Cell type:code id: tags:
``` python
fidle.override('fit_verbosity', 'dataset_name')
```
%% Cell type:markdown id: tags:
## Step 2 - Retrieve data
%% Cell type:code id: tags:
``` python
data = pd.read_csv(f'{datasets_dir}/WineQuality/origine/{dataset_name}', header=0,sep=';')
display(data.head(5).style.format("{0:.2f}"))
print('Missing Data : ',data.isna().sum().sum(), ' Shape is : ', data.shape)
```
%% Cell type:markdown id: tags:
## Step 3 - Preparing the data
### 3.1 - Split data
We will use 80% of the data for training and 20% for validation.
x will be the data of the analysis and y the quality
%% Cell type:code id: tags:
``` python
# ---- Split => train, test
#
data = data.sample(frac=1., axis=0) # Shuffle
data_train = data.sample(frac=0.8, axis=0) # get 80 %
data_test = data.drop(data_train.index) # test = all - train
# ---- Split => x,y (quality is the target)
#
x_train = data_train.drop('quality', axis=1)
y_train = data_train['quality']
x_test = data_test.drop('quality', axis=1)
y_test = data_test['quality']
print('Original data shape was : ',data.shape)
print('x_train : ',x_train.shape, 'y_train : ',y_train.shape)
print('x_test : ',x_test.shape, 'y_test : ',y_test.shape)
```
%% Cell type:markdown id: tags:
### 3.2 - Data normalization
**Note :**
- All input data must be normalized, train and test.
- To do this we will subtract the mean and divide by the standard deviation.
- But test data should not be used in any way, even for normalization.
- The mean and the standard deviation will therefore only be calculated with the train data.
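As a minimal illustration of this rule (toy numbers, not the wine data): the mean and standard deviation come from the train set only, then are reused unchanged on the test set.

```python
import numpy as np

# Toy data: 3 train samples, 1 test sample, 2 features (hypothetical values)
x_train = np.array([[10., 200.], [12., 220.], [14., 240.]])
x_test  = np.array([[11., 210.]])

mean = x_train.mean(axis=0)   # per-feature mean, computed on train only
std  = x_train.std(axis=0)    # per-feature std,  computed on train only

x_train_n = (x_train - mean) / std   # train: now centered and scaled
x_test_n  = (x_test  - mean) / std   # test: SAME mean/std, never its own

print(x_train_n.mean(axis=0), x_train_n.std(axis=0))
```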
%% Cell type:code id: tags:
``` python
display(x_train.describe().style.format("{0:.2f}").set_caption("Before normalization :"))
mean = x_train.mean()
std = x_train.std()
x_train = (x_train - mean) / std
x_test = (x_test - mean) / std
display(x_train.describe().style.format("{0:.2f}").set_caption("After normalization :"))
# Convert our DataFrames to numpy arrays
x_train, y_train = np.array(x_train), np.array(y_train)
x_test, y_test = np.array(x_test), np.array(y_test)
```
%% Cell type:markdown id: tags:
## Step 4 - Build a model
More informations about :
- [Optimizer](https://keras.io/api/optimizers)
- [Activation](https://keras.io/api/layers/activations)
- [Loss](https://keras.io/api/losses)
- [Metrics](https://keras.io/api/metrics)
%% Cell type:code id: tags:
``` python
def get_model_v1(shape):
model = keras.models.Sequential()
model.add(keras.layers.Input(shape, name="InputLayer"))
model.add(keras.layers.Dense(64, activation='relu', name='Dense_n1'))
model.add(keras.layers.Dense(64, activation='relu', name='Dense_n2'))
model.add(keras.layers.Dense(1, name='Output'))
model.compile(optimizer = 'rmsprop',
loss = 'mse',
metrics = ['mae', 'mse'] )
return model
```
%% Cell type:markdown id: tags:
## 5 - Train the model
### 5.1 - Get it
%% Cell type:code id: tags:
``` python
model=get_model_v1( (11,) )
model.summary()
```
%% Cell type:markdown id: tags:
### 5.2 - Add callback
%% Cell type:code id: tags:
``` python
os.makedirs('./run/models', mode=0o750, exist_ok=True)
save_dir = "./run/models/best_model.keras"
savemodel_callback = keras.callbacks.ModelCheckpoint( filepath=save_dir, monitor='val_mae', mode='min', save_best_only=True)
```
%% Cell type:markdown id: tags:
### 5.3 - Train it
%% Cell type:code id: tags:
``` python
history = model.fit(x_train,
y_train,
epochs = 100,
batch_size = 10,
verbose = fit_verbosity,
validation_data = (x_test, y_test),
callbacks = [savemodel_callback])
```
%% Cell type:markdown id: tags:
## Step 6 - Evaluate
### 6.1 - Model evaluation
MAE = Mean Absolute Error (between the labels and predictions)
A MAE of 0.5 represents an average prediction error of 0.5 points on the 0-10 quality scale.
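To make the two metrics concrete, here is a hand computation on a few hypothetical quality predictions (values invented purely for illustration):

```python
import numpy as np

# Hypothetical true qualities and model predictions (illustrative only)
y_true = np.array([5.0, 6.0, 5.0, 7.0])
y_pred = np.array([5.4, 5.8, 5.5, 6.6])

mae = np.mean(np.abs(y_true - y_pred))    # average error, in quality points
mse = np.mean((y_true - y_pred) ** 2)     # squares errors: large misses dominate

print(f'mae={mae:.4f}  mse={mse:.4f}')    # mae=0.3750  mse=0.1525
```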
%% Cell type:code id: tags:
``` python
score = model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Cell type:markdown id: tags:
### 6.2 - Training history
What was the best result during our training ?
%% Cell type:code id: tags:
``` python
print("min( val_mae ) : {:.4f}".format( min(history.history["val_mae"]) ) )
```
%% Cell type:code id: tags:
``` python
fidle.scrawler.history( history, plot={'MSE' :['mse', 'val_mse'],
'MAE' :['mae', 'val_mae'],
'LOSS':['loss','val_loss']}, save_as='01-history')
```
%% Cell type:markdown id: tags:
## Step 7 - Restore a model :
%% Cell type:markdown id: tags:
### 7.1 - Reload model
%% Cell type:code id: tags:
``` python
loaded_model = keras.models.load_model('./run/models/best_model.keras')
loaded_model.summary()
print("Loaded.")
```
%% Cell type:markdown id: tags:
### 7.2 - Evaluate it :
%% Cell type:code id: tags:
``` python
score = loaded_model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Cell type:markdown id: tags:
### 7.3 - Make a prediction
%% Cell type:code id: tags:
``` python
# ---- Pick n entries from our test set
n = 200
ii = np.random.randint(0, len(x_test), n)
x_sample = x_test[ii]
y_sample = y_test[ii]
```
%% Cell type:code id: tags:
``` python
# ---- Make some predictions
y_pred = loaded_model.predict( x_sample, verbose=2 )
```
%% Cell type:code id: tags:
``` python
# ---- Show it
print('Wine Prediction Real Delta')
for i in range(n):
pred = y_pred[i][0]
real = y_sample[i]
delta = real-pred
print(f'{i:03d} {pred:.2f} {real} {delta:+.2f} ')
```
%% Cell type:markdown id: tags:
### A few questions:
- Can this model be used for red wines from Bordeaux and/or Beaujolais?
- What are the limitations of this model?
- What are the limitations of this dataset?
%% Cell type:code id: tags:
``` python
fidle.end()
```
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/logo-paysage.svg"></img>
%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/header.svg"></img>
# <!-- TITLE --> [LWINE1] - Wine quality prediction with a Dense Network (DNN)
<!-- DESC --> Another example of regression, with a wine quality prediction, using PyTorch Lightning
<!-- AUTHOR : Achille Mbogol Touye (EFFILIA-MIAI/SIMaP) -->
## Objectives :
- Predict the **quality of wines**, based on their analysis
- Understanding the principle and the architecture of a regression with a dense neural network with backup and restore of the trained model.
The **[Wine Quality datasets](https://archive.ics.uci.edu/ml/datasets/wine+Quality)** are made up of analyses of a large number of wines, with an associated quality (between 0 and 10)
This dataset is provided by:
Paulo Cortez, University of Minho, Guimarães, Portugal, http://www3.dsi.uminho.pt/pcortez
A. Cerdeira, F. Almeida, T. Matos and J. Reis, Viticulture Commission of the Vinho Verde Region(CVRVV), Porto, Portugal, @2009
This dataset can be retrieved at [University of California Irvine (UCI)](https://archive-beta.ics.uci.edu/ml/datasets/wine+quality)
Due to privacy and logistic issues, only physicochemical and sensory variables are available
There is no data about grape types, wine brand, wine selling price, etc.
- fixed acidity
- volatile acidity
- citric acid
- residual sugar
- chlorides
- free sulfur dioxide
- total sulfur dioxide
- density
- pH
- sulphates
- alcohol
- quality (score between 0 and 10)
## What we're going to do :
- (Retrieve data)
- (Preparing the data)
- (Build a model)
- Train and save the model
- Restore saved model
- Evaluate the model
- Make some predictions
%% Cell type:markdown id: tags:
## Step 1 - Import and init
%% Cell type:code id: tags:
``` python
# Import some packages
import os
import sys
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import lightning.pytorch as pl
import torch.nn.functional as F
import torchvision.transforms as T
from importlib import reload
from IPython.display import Markdown
from torch.utils.data import Dataset, DataLoader, random_split
from modules.progressbar import CustomTrainProgressBar
from modules.data_load import WineQualityDataset, Normalize, ToTensor
from lightning.pytorch.loggers.tensorboard import TensorBoardLogger
from torchmetrics.functional.regression import mean_absolute_error, mean_squared_error
import fidle
# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('LWINE1')
```
%% Cell type:markdown id: tags:
Verbosity during training :
- 0 = silent
- 1 = progress bar
- 2 = one line per epoch
%% Cell type:code id: tags:
``` python
fit_verbosity = 1
dataset_name = 'winequality-red.csv'
```
%% Cell type:markdown id: tags:
Override parameters (batch mode) - Just forget this cell
%% Cell type:code id: tags:
``` python
fidle.override('fit_verbosity', 'dataset_name')
```
%% Cell type:markdown id: tags:
## Step 2 - Retrieve data
%% Cell type:code id: tags:
``` python
csv_file_path=f'{datasets_dir}/WineQuality/origine/{dataset_name}'
datasets=WineQualityDataset(csv_file_path)
display(datasets.data.head(5).style.format("{0:.2f}"))
print('Missing Data : ',datasets.data.isna().sum().sum(), ' Shape is : ', datasets.data.shape)
```
%% Cell type:markdown id: tags:
## Step 3 - Preparing the data
%% Cell type:markdown id: tags:
### 3.1 - Data normalization
**Note :**
- All input features must be normalized.
- To do this we will subtract the mean and divide by the standard deviation for each input features.
- Then we convert numpy array features and target **(quality)** to torch tensor
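The `T.Compose([...])` pattern used below simply chains callables that each take a sample dict and return a transformed one. A minimal pure-Python sketch of the idea, with toy transforms standing in for the real `Normalize` and `ToTensor`:

```python
# Minimal re-implementation of the Compose idea: apply transforms in order
class Compose:
    def __init__(self, transforms):
        self.transforms = transforms
    def __call__(self, sample):
        for t in self.transforms:
            sample = t(sample)
        return sample

# Two toy sample transforms (hypothetical, for illustration only)
def scale(sample):
    return {'features': [x / 10 for x in sample['features']], 'quality': sample['quality']}

def shift(sample):
    return {'features': [x - 1 for x in sample['features']], 'quality': sample['quality']}

pipeline = Compose([scale, shift])
print(pipeline({'features': [10, 20], 'quality': 5}))
# {'features': [0.0, 1.0], 'quality': 5}
```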
%% Cell type:code id: tags:
``` python
transforms=T.Compose([Normalize(csv_file_path), ToTensor()])
dataset=WineQualityDataset(csv_file_path,transform=transforms)
```
%% Cell type:code id: tags:
``` python
display(Markdown("before normalization :"))
display(datasets[:]["features"])
print()
display(Markdown("After normalization :"))
display(dataset[:]["features"])
```
%% Cell type:markdown id: tags:
### 3.2 - Split data
We will use 80% of the data for training and 20% for validation.
x will be the features data of the analysis and y the target (quality)
%% Cell type:code id: tags:
``` python
# ---- Split => train, test
#
data_train_len = int(len(dataset)*0.8) # get 80 %
data_test_len = len(dataset) -data_train_len # test = all - train
# ---- Split => x,y with random_split
#
data_train_subset, data_test_subset=random_split(dataset, [data_train_len, data_test_len])
x_train = data_train_subset[:]["features"]
y_train = data_train_subset[:]["quality" ]
x_test = data_test_subset [:]["features"]
y_test = data_test_subset [:]["quality" ]
print('Original data shape was : ',dataset.data.shape)
print('x_train : ',x_train.shape, 'y_train : ',y_train.shape)
print('x_test : ',x_test.shape, 'y_test : ',y_test.shape)
```
%% Cell type:markdown id: tags:
### 3.3 - For Training model use Dataloader
The Dataset retrieves our dataset's features and labels one sample at a time. While training a model, we typically want to pass samples in mini-batches and reshuffle the data at every epoch to reduce overfitting. DataLoader is an iterable that abstracts this complexity for us behind an easy API.
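Stripped of shuffling, tensor collation and worker processes, the batching job DataLoader performs can be sketched in a few lines of plain Python (illustrative only, not how DataLoader is actually implemented):

```python
# Cut an indexable dataset into consecutive mini-batches
# (the real DataLoader also shuffles, collates tensors, uses workers, ...)
def batches(dataset, batch_size):
    for start in range(0, len(dataset), batch_size):
        yield dataset[start:start + batch_size]

data = list(range(7))
print(list(batches(data, 3)))   # [[0, 1, 2], [3, 4, 5], [6]]
```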
%% Cell type:code id: tags:
``` python
# train batch data
train_loader= DataLoader(
dataset=data_train_subset,
shuffle=True,
batch_size=20,
num_workers=2
)
# test batch data
test_loader= DataLoader(
dataset=data_test_subset,
shuffle=False,
batch_size=20,
num_workers=2
)
```
%% Cell type:markdown id: tags:
## Step 4 - Build a model
More informations about :
- [Optimizer](https://pytorch.org/docs/stable/optim.html)
- [Activation](https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity)
- [Loss](https://pytorch.org/docs/stable/nn.html#loss-functions)
- [Metrics](https://lightning.ai/docs/torchmetrics/stable/)
%% Cell type:code id: tags:
``` python
class LitRegression(pl.LightningModule):
def __init__(self,in_features=11):
super().__init__()
self.model = nn.Sequential(
nn.Linear(in_features, 128), # hidden layer 1
nn.ReLU(), # activation function
nn.Linear(128, 128), # hidden layer 2
nn.ReLU(), # activation function
nn.Linear(128, 1)) # output layer
def forward(self, x): # forward pass
x = self.model(x)
return x
# optimizer
def configure_optimizers(self):
optimizer = torch.optim.RMSprop(self.parameters(),lr=1e-4)
return optimizer
def training_step(self, batch, batch_idx):
# defines the train loop.
x_features, y_target = batch["features"],batch["quality"]
# forward pass
y_pred = self.model(x_features)
# loss function MSE
loss = F.mse_loss(y_pred, y_target)
# metrics mae
mae = mean_absolute_error(y_pred,y_target)
# metrics mse
mse = mean_squared_error(y_pred,y_target)
metrics= {"train_loss": loss,
"train_mae" : mae,
"train_mse" : mse
}
# logs metrics for each training_step
self.log_dict(metrics,
on_step = False,
on_epoch = True,
logger = True,
prog_bar = True,
)
return loss
def validation_step(self, batch, batch_idx):
# defines the val loop.
x_features, y_target = batch["features"],batch["quality"]
# forward pass
y_pred = self.model(x_features)
# loss function MSE
loss = F.mse_loss(y_pred, y_target)
# metrics
mae = mean_absolute_error(y_pred,y_target)
# metrics
mse = mean_squared_error(y_pred,y_target)
metrics= {"val_loss": loss,
"val_mae" : mae,
"val_mse" : mse
}
# logs metrics for each validation_step
self.log_dict(metrics,
on_step = False,
on_epoch = True,
logger = True,
prog_bar = True,
)
return metrics
```
%% Cell type:markdown id: tags:
## 5 - Train the model
### 5.1 - Get it
%% Cell type:code id: tags:
``` python
reg=LitRegression(in_features=11)
print(reg)
```
%% Cell type:markdown id: tags:
### 5.2 - Add callback
%% Cell type:code id: tags:
``` python
os.makedirs('./run/models', exist_ok=True)
save_dir = "./run/models/"
filename ='best-model-{epoch}-{val_loss:.2f}'
savemodel_callback = pl.callbacks.ModelCheckpoint(dirpath=save_dir,
filename=filename,
save_top_k=1,
verbose=False,
monitor="val_loss"
)
```
%% Cell type:markdown id: tags:
### 5.3 - Train it
%% Cell type:code id: tags:
``` python
# loggers data
os.makedirs(f'{run_dir}/logs', mode=0o750, exist_ok=True)
logger= TensorBoardLogger(save_dir=f'{run_dir}/logs',name="reg_logs")
```
%% Cell type:code id: tags:
``` python
# train model
trainer = pl.Trainer(accelerator='auto',
max_epochs=100,
logger=logger,
num_sanity_val_steps=0,
callbacks=[savemodel_callback,CustomTrainProgressBar()])
trainer.fit(model=reg, train_dataloaders=train_loader, val_dataloaders=test_loader)
```
%% Cell type:markdown id: tags:
## Step 6 - Evaluate
### 6.1 - Model evaluation
MAE = Mean Absolute Error (between the labels and predictions)
A MAE of 0.5 represents an average prediction error of 0.5 points on the 0-10 quality scale.
%% Cell type:code id: tags:
``` python
score=trainer.validate(model=reg, dataloaders=test_loader, verbose=False)
print('x_test / loss : {:5.4f}'.format(score[0]['val_loss']))
print('x_test / mae : {:5.4f}'.format(score[0]['val_mae']))
print('x_test / mse : {:5.4f}'.format(score[0]['val_mse']))
```
%% Cell type:markdown id: tags:
### 6.2 - Training history
To access logs with tensorboad :
- Under **Docker**, from a terminal launched via the jupyterlab launcher, use the following command:<br>
```tensorboard --logdir <path-to-logs> --host 0.0.0.0```
- If you're **not using Docker**, from a terminal :<br>
```tensorboard --logdir <path-to-logs>```
**Note:** Only one tensorboard instance can be used at a time.
%% Cell type:markdown id: tags:
## Step 7 - Restore a model :
%% Cell type:markdown id: tags:
### 7.1 - Reload model
%% Cell type:code id: tags:
``` python
# Load the model from a checkpoint
loaded_model = LitRegression.load_from_checkpoint(savemodel_callback.best_model_path)
print("Loaded:")
print(loaded_model)
```
%% Cell type:markdown id: tags:
### 7.2 - Evaluate it :
%% Cell type:code id: tags:
``` python
score=trainer.validate(model=loaded_model, dataloaders=test_loader, verbose=False)
print('x_test / loss : {:5.4f}'.format(score[0]['val_loss']))
print('x_test / mae : {:5.4f}'.format(score[0]['val_mae']))
print('x_test / mse : {:5.4f}'.format(score[0]['val_mse']))
```
%% Cell type:markdown id: tags:
### 7.3 - Make a prediction
%% Cell type:code id: tags:
``` python
# ---- Pick n entries from our test set
n = 200
ii = np.random.randint(0, len(x_test), n)
x_sample = x_test[ii]
y_sample = y_test[ii]
```
%% Cell type:code id: tags:
``` python
# ---- Make some predictions
# Set the model in evaluation mode
loaded_model.eval()
# Perform inference without tracking gradients
with torch.no_grad():
    y_pred = loaded_model( x_sample )
```
%% Cell type:code id: tags:
``` python
# ---- Show it
print('Wine Prediction Real Delta')
for i in range(n):
pred = y_pred[i][0].item()
real = y_sample[i][0].item()
delta = real-pred
print(f'{i:03d} {pred:.2f} {real} {delta:+.2f} ')
```
%% Cell type:code id: tags:
``` python
fidle.end()
```
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/logo-paysage.svg"></img>
%% Cell type:code id: tags:
``` python
```
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___|
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/SARI/DEVLOG 2023
# ------------------------------------------------------------------
# 2.0 version by Achille Mbogol Touye (EFELIA-MIAI/SIMAP¨), sep 2023
import torch
import pandas as pd
import lightning.pytorch as pl
class WineQualityDataset(pl.LightningDataModule):
"""Wine Quality dataset."""
def __init__(self, csv_file, transform=None):
"""
Args:
csv_file (string): Path to the csv file.
transform (callable, optional): Optional transform to be applied on a sample.
"""
super().__init__()
self.csv_file=csv_file
self.data = pd.read_csv(self.csv_file, header=0, sep=';')
self.transform = transform
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
features = self.data.iloc[idx, :-1].values.astype('float32')
target = self.data.iloc[idx, -1:].values.astype('float32')
sample = {'features':features, 'quality':target}
if self.transform:
sample = self.transform(sample)
return sample
class Normalize(WineQualityDataset):
"""normalize data"""
def __init__(self, csv_file):
mean,std=self.compute_mean_and_std(csv_file)
self.mean=mean
self.std=std
def compute_mean_and_std(self, csv_file):
"""Compute the mean and std for each feature."""
dataset= WineQualityDataset(csv_file)
mean = dataset.data.iloc[:,:-1].mean(axis=0).values.astype('float32')
std = dataset.data.iloc[:,:-1].std(axis=0).values.astype('float32')
return mean,std
def __call__(self, sample):
features, target = sample['features'],sample['quality']
norm_features = (features - self.mean) / self.std # normalize features
return {'features':norm_features,
'quality':target
}
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
features, target = sample['features'], sample['quality']
return {'features': torch.from_numpy(features),
'quality' : torch.from_numpy(target)
}
# ------------------------------------------------------------------
# _____ _ _ _
# | ___(_) __| | | ___
# | |_ | |/ _` | |/ _ \
# | _| | | (_| | | __/
# |_| |_|\__,_|_|\___|
# ------------------------------------------------------------------
# Formation Introduction au Deep Learning (FIDLE)
# CNRS/SARI/DEVLOG 2023
# ------------------------------------------------------------------
# 2.0 version by Achille Mbogol Touye (EFELIA-MIAI/SIMAP¨), sep 2023
from tqdm import tqdm as _tqdm
from lightning.pytorch.callbacks import TQDMProgressBar
# A progress bar callback to display the training metrics
class CustomTrainProgressBar(TQDMProgressBar):
def __init__(self):
super().__init__()
self._val_progress_bar = _tqdm()
def init_train_tqdm(self):
bar=super().init_train_tqdm()
bar.set_description("Training")
return bar
@property
def val_progress_bar(self):
if self._val_progress_bar is None:
raise ValueError("The `_val_progress_bar` reference has not been set yet.")
return self._val_progress_bar
def on_validation_start(self, trainer, pl_module):
# Disable the display of the validation progress bar
self.val_progress_bar.disable = True
# base image
ARG PYTHON_VERSION=3.9
ARG docker_image_base=python:${PYTHON_VERSION}-slim
FROM ${docker_image_base}
# maintainers
LABEL maintainer1=soraya.arias@inria.fr maintainer2=jean-luc.parouty@simap.grenoble-inp.fr
ARG ARCH_VERSION=cpu
ARG BRANCH=pre-master
# Ensure a sane environment
ENV TZ=Europe/Paris LANG=C.UTF-8 LC_ALL=C.UTF-8 DEBIAN_FRONTEND=noninteractive
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone && \
apt update --fix-missing && \
apt install -y --no-install-recommends apt-utils \
procps \
python3-venv \
python3-pip && \
apt -y dist-upgrade && \
apt clean && \
rm -fr /var/lib/apt/lists/*
# copy Python requirement packages list in docker image
COPY requirements-${ARCH_VERSION}.txt /root/requirements-${ARCH_VERSION}.txt
# Update Python tools and install requirements packages for Fidle
RUN python3 -m pip install --upgrade pip && \
pip3 install --no-cache-dir --upgrade -r /root/requirements-${ARCH_VERSION}.txt
# Install tensorboard & update jupyter
RUN pip3 install --no-cache-dir --upgrade tensorboard tensorboardX jupyter ipywidgets
# Remove default Python kernel logos
RUN rm /usr/local/share/jupyter/kernels/python3/logo*
# Change default logo and name kernels
COPY images/env-keras3.png /usr/local/share/jupyter/kernels/python3/logo-64x64.png
COPY images/env-keras3.svg /usr/local/share/jupyter/kernels/python3/logo-svg.svg
# Get Fidle datasets
RUN mkdir /data && \
fid install_datasets --quiet --install_dir /data
# Get Fidle notebooks and create link
RUN mkdir /notebooks/ && \
fid install_notebooks --notebooks fidle-${BRANCH} --quiet --install_dir /notebooks && \
ln -s $(ls -1td /notebooks/* | head -1) /notebooks/last
# Add Jupyter configuration (no browser, listen all interfaces, ...)
COPY jupyter_lab_config.py /root/.jupyter/jupyter_lab_config.py
COPY notebook.json /root/.jupyter/nbconfig/notebook.json
# Jupyter notebook uses 8888
EXPOSE 8888
# tensorboard uses 6006
EXPOSE 6006
VOLUME /notebooks
WORKDIR /notebooks
# Set Keras backend
ENV KERAS_BACKEND=torch
# Set Python path to add fidle path
ENV PYTHONPATH=/notebooks/fidle-master/:$PYTHONPATH
# Set default shell (useful in the notebooks)
ENV SHELL=/bin/bash
# Set Fidle dataset directory variable
ENV FIDLE_DATASETS_DIR=/data/datasets-fidle
# Run a notebook by default
CMD ["jupyter", "lab"]
docker/images/env-keras3.png (binary image, 2.91 KiB)
<?xml version="1.0" encoding="UTF-8"?>
<svg id="Calque_2" data-name="Calque 2" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
<defs>
<style>
.cls-1 {
fill: #d00000;
}
.cls-1, .cls-2, .cls-3, .cls-4, .cls-5 {
stroke-width: 0px;
}
.cls-2 {
fill: none;
}
.cls-3 {
fill: #fff;
}
.cls-4 {
fill: #e12229;
}
.cls-5 {
fill: #ee4c2c;
}
</style>
</defs>
<g id="Mode_Isolation" data-name="Mode Isolation">
<g>
<rect class="cls-3" width="100" height="100"/>
<g id="group">
<path id="Path" class="cls-5" d="M84.64,15.79l-3.09,3.09c5.06,5.06,5.06,13.21,0,18.17-5.06,5.06-13.21,5.06-18.17,0-5.06-5.06-5.06-13.21,0-18.17l8.01-8.01,1.12-1.12V3.7l-12.08,12.08c-6.75,6.75-6.75,17.61,0,24.36,6.75,6.75,17.61,6.75,24.22,0,6.75-6.79,6.75-17.61,0-24.36Z"/>
<path id="Path-1" class="cls-5" d="M80.85,12.79c0,1.24-1.01,2.25-2.25,2.25s-2.25-1.01-2.25-2.25,1.01-2.25,2.25-2.25,2.25,1.01,2.25,2.25Z"/>
</g>
<g>
<g>
<path class="cls-2" d="M52.97,86.43c-4.89,1.33-6.52,1.26-7.02,1.15.37-.75,2.11-2.39,3.93-3.69.43-.31.54-.91.24-1.35-.3-.44-.89-.55-1.33-.24-2.58,1.83-5.48,4.39-4.67,6.16.31.67.95,1.12,2.5,1.12,1.4,0,3.55-.37,6.85-1.27.51-.14.81-.67.67-1.19-.13-.52-.66-.83-1.17-.69Z"/>
<g>
<path class="cls-4" d="M68.15,44.5c-.34,0-.63-.17-.87-.5-.3-.42-.64-.57-1.3-.57-.2,0-.4.01-.59.03-.22.01-.42.03-.62.03-.32,0-.79-.03-1.23-.27-1.36-.77-1.86-2.52-1.11-3.9.5-.92,1.46-1.5,2.49-1.5.48,0,.96.12,1.38.36,1.06.59,2.99,4.78,2.77,5.62l-.18.7-.74.02Z"/>
<path class="cls-3" d="M64.93,38.75c.31,0,.63.08.92.24.85.48,2.51,4.58,2.3,4.58-.02,0-.05-.03-.1-.11-.58-.82-1.33-.97-2.06-.97-.43,0-.84.05-1.21.05-.29,0-.56-.03-.77-.15-.92-.52-1.26-1.7-.75-2.64.35-.64,1-1.01,1.67-1.01M64.93,36.87c-1.38,0-2.66.76-3.32,1.99-.99,1.83-.33,4.15,1.48,5.16.62.35,1.26.39,1.68.39.21,0,.44-.01.68-.03.17-.01.35-.02.53-.02.41,0,.45.05.53.17.55.79,1.26.9,1.64.9h1.45l.38-1.41c.11-.43.24-.93-.94-3.48-1.06-2.29-1.74-2.9-2.27-3.2-.56-.32-1.2-.48-1.84-.48h0Z"/>
</g>
<path class="cls-4" d="M62.06,75.3c-.39-.47-.34-1.18.12-1.58.46-.4,1.16-.35,1.55.13,5.79,6.92,15.18,8.77,24.52,4.83.95-2.66,1.42-5.45,1.49-8.18,0-7.41-3.53-14.26-9.52-18.38-2.78-1.91-9.2-4.45-17.62-3.04-6.19,1.04-12.61,5.82-15.12,7.97-1.51,1.29-19.5,18.68-27,15.22-5.07-2.35,3.99-10.88-.17-18.68-.11-.21-.41-.23-.55-.04-2.12,2.91-4.18,6.41-7,4.84-1.26-.7-2.39-2.94-3.26-4.36-.18-.28-.61-.14-.6.19.32,9.8,4.97,17.01,8.71,21.57,6.47,7.9,17.8,17.09,36.12,18.95,18.88,1.75,28.93-4.73,33.3-13.21-2.84.96-5.67,1.44-8.4,1.44-6.45,0-12.34-2.63-16.56-7.67ZM53.46,88.31c-3.3.9-5.45,1.27-6.85,1.27-1.55,0-2.19-.45-2.5-1.12-.81-1.77,2.1-4.32,4.67-6.16.43-.3,1.03-.2,1.33.24.3.44.19,1.05-.24,1.35-1.83,1.3-3.56,2.94-3.93,3.69.5.11,2.14.18,7.02-1.15.51-.14,1.03.17,1.17.69.14.52-.16,1.05-.67,1.19Z"/>
<g>
<path class="cls-4" d="M70.65,47.4c-.36,0-.83-.21-1-.82-.32-1.15.43-5.99,2.83-7.43.42-.25.9-.39,1.39-.39,1.04,0,2,.58,2.5,1.51.75,1.38.25,3.13-1.11,3.9-.15.09-.33.18-.53.28-.93.49-2.34,1.22-3.2,2.45-.3.42-.68.49-.88.49h0Z"/>
<path class="cls-3" d="M73.88,39.71c.67,0,1.33.38,1.67,1.02.51.94.17,2.12-.75,2.64s-2.86,1.33-4.04,3.01c-.04.06-.08.09-.11.09-.43,0,.1-5.18,2.31-6.51.29-.17.6-.25.91-.25M73.88,37.83c-.66,0-1.31.18-1.88.52-2.91,1.74-3.65,7.04-3.25,8.48.25.9,1.01,1.5,1.9,1.5.65,0,1.25-.32,1.64-.89.73-1.04,1.97-1.69,2.87-2.16.21-.11.39-.21.55-.29,1.81-1.02,2.47-3.33,1.48-5.17-.67-1.23-1.94-2-3.32-2h0Z"/>
</g>
<g>
<path class="cls-4" d="M70.32,38.97c-.19,0-.68-.07-.96-.67-.34-.73-.85-3.85.48-5.42.42-.5,1.03-.78,1.67-.78.54,0,1.05.2,1.44.56.86.8.91,2.2.09,3.11-.08.09-.17.19-.28.3-.48.5-1.19,1.26-1.48,2.17-.17.54-.62.73-.96.73h0Z"/>
<path class="cls-3" d="M71.52,33.04c.29,0,.58.1.8.31.49.46.51,1.26.03,1.8s-1.54,1.5-1.95,2.81c-.02.06-.04.08-.06.08-.28,0-.88-3.23.23-4.55.25-.3.61-.45.96-.45M71.52,31.16c-.92,0-1.79.41-2.39,1.11-1.6,1.89-1.08,5.4-.61,6.42.52,1.13,1.52,1.22,1.81,1.22.85,0,1.58-.54,1.85-1.39.22-.7.83-1.34,1.27-1.81.11-.12.21-.23.3-.32,1.15-1.29,1.08-3.27-.15-4.42-.56-.52-1.3-.81-2.07-.81h0Z"/>
</g>
</g>
<g>
<ellipse class="cls-3" cx="75.51" cy="68.45" rx="3.52" ry="3.88"/>
<ellipse class="cls-4" cx="76.93" cy="69.31" rx="2.38" ry="2.42"/>
</g>
</g>
<g>
<path class="cls-3" d="M43.24,43.2s0,0,0,0H11.89s0,0,0,0V11.85s0,0,0,0h31.35s0,0,0,0v31.35h0Z"/>
<path class="cls-1" d="M42.72,42.68s0,0,0,0H12.41s0,0,0,0V12.37s0,0,0,0h30.31s0,0,0,0v30.31h0Z"/>
<path class="cls-3" d="M20.68,35.76s.01.05.03.07l.52.52s.05.03.07.03h1.78s.05-.01.07-.03l.52-.52s.03-.05.03-.07v-5.63s.01-.05.03-.07l2.26-2.15s.04-.01.05,0l5.7,8.44s.04.03.06.03h2.52s.05-.02.06-.04l.46-.88s0-.05,0-.07l-6.67-9.66s-.01-.05,0-.06l6.13-6.1s.03-.05.03-.07v-.11s0-.06-.02-.08l-.35-.81s-.04-.04-.06-.04h-2.49s-.05.01-.07.03l-7.62,7.64s-.03.01-.03-.01v-7.01s-.01-.06-.03-.07l-.51-.55s-.05-.03-.07-.03h-1.79s-.05.01-.07.03l-.52.56s-.03.05-.03.07v16.65h0Z"/>
</g>
</g>
</g>
</svg>