{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "<img width=\"800px\" src=\"../fidle/img/header.svg\"></img>\n", "\n", "# <!-- TITLE --> [K3AE2] - Building and training an AE denoiser model\n", "<!-- DESC --> Episode 1 : Construction of a denoising autoencoder and training of it with a noisy MNIST dataset.\n", "\n", "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n", "\n", "## Objectives :\n", " - Understanding and implementing a denoizing **autoencoder** neurals network (AE)\n", " - First overview or example of Keras procedural syntax\n", "\n", "The calculation needs being important, it is preferable to use a very simple dataset such as MNIST. \n", "The use of a GPU is often indispensable.\n", "\n", "## What we're going to do :\n", "\n", " - Defining an AE model\n", " - Build the model\n", " - Train it\n", " - Follow the learning process with Tensorboard\n", " \n", "## Data Terminology :\n", "- `clean_train`, `clean_test` for noiseless images \n", "- `noisy_train`, `noisy_test` for noisy images\n", "- `denoised_test` for denoised images at the output of the model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 1 - Init python stuff\n", "### 1.1 - Init" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "os.environ['KERAS_BACKEND'] = 'torch'\n", "\n", "import keras\n", "\n", "import numpy as np\n", "from skimage import io\n", "import random\n", "\n", "from keras import layers\n", "from keras.callbacks import ModelCheckpoint, TensorBoard\n", "\n", "import os\n", "from importlib import reload\n", "\n", "from modules.MNIST import MNIST\n", "from modules.ImagesCallback import ImagesCallback\n", "\n", "import fidle\n", "\n", "# Init Fidle environment\n", "run_id, run_dir, datasets_dir = fidle.init('K3AE2')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1.2 - Parameters\n", "`prepared_dataset` : Filename of the prepared dataset (Need 400 Mo, but can be in ./data) \n", "`dataset_seed` : Random seed for shuffling dataset \n", "`scale` : % of the dataset to use (1. for 100%) \n", "`latent_dim` : Dimension of the latent space \n", "`train_prop` : Percentage for train (the rest being for the test)\n", "`batch_size` : Batch size \n", "`epochs` : Nb of epochs for training\\\n", "`fit_verbosity` is the verbosity during training : 0 = silent, 1 = progress bar, 2 = one line per epoch\n", "\n", "Note : scale=.2, epoch=20 => 3'30s on a laptop" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prepared_dataset = './data/mnist-noisy.h5'\n", "dataset_seed = 123\n", "\n", "scale = .1\n", "\n", "latent_dim = 10\n", "\n", "train_prop = .8\n", "batch_size = 128\n", "epochs = 20\n", "fit_verbosity = 1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Override parameters (batch mode) - Just forget this cell" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fidle.override('prepared_dataset', 'dataset_seed', 'scale', 'latent_dim')\n", "fidle.override('train_prop', 'batch_size', 'epochs', 'fit_verbosity')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 2 - Retrieve dataset\n", "With our MNIST class, in one call, we can reload, rescale, shuffle and split our previously saved dataset :-)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "clean_train,clean_test, noisy_train,noisy_test, _,_ = MNIST.reload_prepared_dataset(scale = scale, \n", " train_prop = train_prop,\n", " seed = dataset_seed,\n", " shuffle = True,\n", " filename=prepared_dataset )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 3 - Build models" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Encoder" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "inputs = keras.Input(shape=(28, 28, 1))\n", "x = layers.Conv2D(32, 3, activation=\"relu\", strides=2, padding=\"same\")(inputs)\n", "x = layers.Conv2D(64, 3, activation=\"relu\", strides=2, padding=\"same\")(x)\n", "x = layers.Flatten()(x)\n", "x = layers.Dense(16, activation=\"relu\")(x)\n", "z = layers.Dense(latent_dim)(x)\n", "\n", "encoder = keras.Model(inputs, z, name=\"encoder\")\n", "# encoder.summary()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Decoder" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "inputs = keras.Input(shape=(latent_dim,))\n", "x = layers.Dense(7 * 7 * 64, activation=\"relu\")(inputs)\n", "x = layers.Reshape((7, 7, 64))(x)\n", "x = layers.Conv2DTranspose(64, 3, activation=\"relu\", strides=2, padding=\"same\")(x)\n", "x = layers.Conv2DTranspose(32, 3, activation=\"relu\", strides=2, padding=\"same\")(x)\n", "outputs = layers.Conv2DTranspose(1, 3, activation=\"sigmoid\", padding=\"same\")(x)\n", "\n", "decoder = keras.Model(inputs, outputs, name=\"decoder\")\n", "# decoder.summary()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### AE\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "inputs = keras.Input(shape=(28, 28, 1))\n", "\n", "latents = encoder(inputs)\n", "outputs = decoder(latents)\n", "\n", "ae = keras.Model(inputs,outputs, name=\"ae\")\n", "\n", "ae.compile(optimizer=keras.optimizers.Adam(), loss='binary_crossentropy')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 4 - Train\n", "20' on a CPU \n", "1'12 on a GPU (V100, IDRIS)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# ---- Callback : Images\n", "#\n", "fidle.utils.mkdir( run_dir + '/images')\n", "filename = run_dir + '/images/image-{epoch:03d}-{i:02d}.jpg'\n", "callback_images = ImagesCallback(filename, x=clean_test[:5], encoder=encoder,decoder=decoder)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "chrono = fidle.Chrono()\n", "chrono.start()\n", "\n", "history = ae.fit(noisy_train, clean_train,\n", " batch_size = batch_size,\n", " epochs = epochs,\n", " verbose = fit_verbosity,\n", " validation_data = (noisy_test, clean_test),\n", " callbacks = [ callback_images ] )\n", "\n", "chrono.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Save model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "os.makedirs(f'{run_dir}/models', exist_ok=True)\n", "\n", "encoder.save(f'{run_dir}/models/encoder.keras')\n", "decoder.save(f'{run_dir}/models/decoder.keras')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 5 - History" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fidle.scrawler.history(history, plot={'loss':['loss','val_loss']}, save_as='01-history')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 6 - Denoising progress" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "imgs=[]\n", "for epoch in range(0,epochs,2):\n", " for i in range(5):\n", " filename = run_dir + '/images/image-{epoch:03d}-{i:02d}.jpg'.format(epoch=epoch, i=i)\n", " img = io.imread(filename)\n", " imgs.append(img) \n", "\n", "fidle.utils.subtitle('Real images (clean_test) :')\n", "fidle.scrawler.images(clean_test[:5], None, indices='all', columns=5, x_size=2,y_size=2, interpolation=None, save_as='02-original-real')\n", "\n", "fidle.utils.subtitle('Noisy images (noisy_test) :')\n", "fidle.scrawler.images(noisy_test[:5], None, indices='all', columns=5, x_size=2,y_size=2, interpolation=None, save_as='03-original-noisy')\n", "\n", "fidle.utils.subtitle('Evolution during the training period (denoised_test) :')\n", "fidle.scrawler.images(imgs, None, indices='all', columns=5, x_size=2,y_size=2, interpolation=None, y_padding=0.1, save_as='04-learning')\n", "\n", "fidle.utils.subtitle('Noisy images (noisy_test) :')\n", "fidle.scrawler.images(noisy_test[:5], None, indices='all', columns=5, x_size=2,y_size=2, interpolation=None, save_as=None)\n", "\n", "fidle.utils.subtitle('Real images (clean_test) :')\n", "fidle.scrawler.images(clean_test[:5], None, indices='all', columns=5, x_size=2,y_size=2, interpolation=None, save_as=None)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 7 - Evaluation\n", "**Note :** We will use the following data:\\\n", "`clean_train`, `clean_test` for noiseless images \\\n", "`noisy_train`, `noisy_test` for noisy images\\\n", "`denoised_test` for denoised images at the output of the model\n", " \n", "### 7.1 - Reload model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "encoder = keras.models.load_model(f'{run_dir}/models/encoder.keras')\n", "decoder = keras.models.load_model(f'{run_dir}/models/decoder.keras')\n", "\n", "inputs = keras.Input(shape=(28, 28, 1))\n", "\n", "latents = encoder(inputs)\n", "outputs = decoder(latents)\n", "\n", "ae_reloaded = keras.Model(inputs,outputs, name=\"ae\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 7.2 - Let's make a prediction" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "denoised_test = ae_reloaded.predict(noisy_test, verbose=0)\n", "\n", "print('Denoised images (denoised_test) shape : ',denoised_test.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 7.3 - Denoised images " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "i=random.randint(0,len(denoised_test)-8)\n", "j=i+8\n", "\n", "fidle.utils.subtitle('Noisy test images (input):')\n", "fidle.scrawler.images(noisy_test[i:j], None, indices='all', columns=8, x_size=2,y_size=2, interpolation=None, save_as='05-test-noisy')\n", "\n", "fidle.utils.subtitle('Denoised images (output):')\n", "fidle.scrawler.images(denoised_test[i:j], None, indices='all', columns=8, x_size=2,y_size=2, interpolation=None, save_as='06-test-predict')\n", "\n", "fidle.utils.subtitle('Real test images :')\n", "fidle.scrawler.images(clean_test[i:j], None, indices='all', columns=8, x_size=2,y_size=2, interpolation=None, save_as='07-test-real')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fidle.end()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "<img width=\"80px\" src=\"../fidle/img/logo-paysage.svg\"></img>" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.9.2 ('fidle-env')", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.2" }, "vscode": { "interpreter": { "hash": "b3929042cc22c1274d74e3e946c52b845b57cb6d84f2d591ffe0519b38e4896d" } } }, "nbformat": 4, "nbformat_minor": 4 }