{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img width=\"800px\" src=\"../fidle/img/header.svg\"></img>\n",
    "\n",
    "# <!-- TITLE --> [K3GTSRB1] - Dataset analysis and preparation\n",
    "<!-- DESC --> Episode 1 : Analysis of the GTSRB dataset and creation of an enhanced dataset\n",
    "<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
    "\n",
    "## Objectives :\n",
    " - Understand the **complexity associated with data**, even when it is only images\n",
    " - Learn how to build up a simple and **usable image dataset**\n",
    "\n",
    "The German Traffic Sign Recognition Benchmark (GTSRB) is a dataset with more than 50,000 photos of road signs from about 40 classes.  \n",
    "The final aim is to recognise them !  \n",
    "\n",
    "Description is available there : http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset\n",
    "\n",
    "\n",
    "## What we're going to do :\n",
    "\n",
    " - Understanding the dataset\n",
    " - Preparing and formatting enhanced data\n",
    " - Save enhanced datasets in h5 file format\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 1 -  Import and init"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os, time, sys\n",
    "import csv\n",
    "import math, random\n",
    "\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import matplotlib.pyplot as plt\n",
    "import h5py\n",
    "\n",
    "from skimage.morphology import disk\n",
    "from skimage.util import img_as_ubyte\n",
    "from skimage.filters import rank\n",
    "from skimage import io, color, exposure, transform\n",
    "\n",
    "from importlib import reload\n",
    "\n",
    "import fidle\n",
    "\n",
    "# Init Fidle environment\n",
    "run_id, run_dir, datasets_dir = fidle.init('K3GTSRB1')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 2 - Parameters\n",
    "The generation of datasets may require some time and space : **10' and 10 GB**.  \n",
    "\n",
    "You can choose to perform tests or generate the whole enhanced dataset by setting the following parameters:  \n",
    "`scale` : 1 mean 100% of the dataset - set 0.2 for tests (need 2 minutes with scale = 0.2)  \n",
    "`progress_verbosity`: Verbosity of progress bar: 0=silent, 1=progress bar, 2=One line  \n",
    "`output_dir` : where to write enhanced dataset, could be :\n",
    " - `./data`, for tests purpose\n",
    " - `<datasets_dir>/GTSRB/enhanced` to add clusters in your datasets dir.  \n",
    " \n",
    "Uncomment the right lines according to what you want :"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# ---- For smart tests :\n",
    "#\n",
    "scale      = 0.2\n",
    "output_dir = './data' \n",
    "\n",
    "# ---- For a Full dataset generation :\n",
    "#\n",
    "# scale      = 1\n",
    "# output_dir = f'{datasets_dir}/GTSRB/enhanced'\n",
    "\n",
    "# ---- Verbosity\n",
    "#\n",
    "progress_verbosity = 2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Override parameters (batch mode) - Just forget this cell"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fidle.override('scale', 'output_dir', 'progress_verbosity')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 3 - Read the dataset\n",
    "Description is available there : http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset\n",
    " - Each directory contains one CSV file with annotations : `GT-<ClassID>.csv` and the training images\n",
    " - First line is fieldnames: `Filename ; Width ; Height ; Roi.X1 ; Roi.Y1 ; Roi.X2 ; Roi.Y2 ; ClassId`\n",
    "    \n",
    "### 3.1 - Understanding the dataset\n",
    "The original dataset is in : **\\<dataset_dir\\>/GTSRB/origine.**  \n",
    "There is 3 subsets : **Train**, **Test** and **Meta.**  \n",
    "Each subset have an **csv file** and a **subdir** with **images**.\n",
    "    "
   ]
  },
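  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can quickly check this layout by listing the content of the origin directory (an optional sanity check, assuming the dataset is installed under `<datasets_dir>/GTSRB/origine`) :"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# ---- Optional sanity check : list the original GTSRB directory\n",
    "#      We expect Train.csv, Test.csv, Meta.csv and the Train/, Test/, Meta/ subdirectories\n",
    "#\n",
    "for item in sorted(os.listdir(f'{datasets_dir}/GTSRB/origine')):\n",
    "    print(item)"
   ]
  },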
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df = pd.read_csv(f'{datasets_dir}/GTSRB/origine/Test.csv', header=0)\n",
    "display(df.head(10))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.2 - Usefull functions\n",
    "A nice function for reading a dataset from an index.csv file.\\\n",
    "Input: an intex.csv file\\\n",
    "Output: an array of images ans an array of corresponding labels\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def  read_csv_dataset(csv_file): \n",
    "    '''\n",
    "    Reads traffic sign data from German Traffic Sign Recognition Benchmark dataset.\n",
    "    Arguments:  \n",
    "        csv filename :  Description file, Example /data/GTSRB/Train.csv\n",
    "    Returns:\n",
    "        x,y          :  np array of images, np array of corresponding labels\n",
    "    '''\n",
    "\n",
    "    path = os.path.dirname(csv_file)\n",
    "    name = os.path.basename(csv_file)\n",
    "\n",
    "    # ---- Read csv file\n",
    "    #\n",
    "    df = pd.read_csv(csv_file, header=0)\n",
    "    \n",
    "    # ---- Get filenames and ClassIds\n",
    "    #\n",
    "    filenames = df['Path'].to_list()\n",
    "    y         = df['ClassId'].to_list()\n",
    "    x         = []\n",
    "    \n",
    "    # ---- Read images\n",
    "    #\n",
    "    for filename in filenames:\n",
    "        image=io.imread(f'{path}/{filename}')\n",
    "        x.append(image)\n",
    "        fidle.utils.update_progress(name,len(x),len(filenames), verbosity=progress_verbosity)\n",
    "    \n",
    "    # ---- Return\n",
    "    #\n",
    "    return np.array(x,dtype=object),np.array(y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.2 - Read the data\n",
    "We will read the following datasets:\n",
    " - **Train** subset, for learning data as :  `x_train, y_train`\n",
    " - **Test** subset, for validation data as :  `x_test, y_test`\n",
    " - **Meta** subset, for visualisation as : `x_meta, y_meta`\n",
    " \n",
    "The learning data will be randomly mixted and the illustration data (Meta) sorted.  \n",
    "Will take about 1'30s on HPC or 45s on my labtop."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "chrono=fidle.Chrono()\n",
    "\n",
    "chrono.start()\n",
    "\n",
    "# ---- Read datasets\n",
    "\n",
    "(x_train,y_train) = read_csv_dataset(f'{datasets_dir}/GTSRB/origine/Train.csv')\n",
    "(x_test ,y_test)  = read_csv_dataset(f'{datasets_dir}/GTSRB/origine/Test.csv')\n",
    "(x_meta ,y_meta)  = read_csv_dataset(f'{datasets_dir}/GTSRB/origine/Meta.csv')\n",
    "    \n",
    "# ---- Shuffle train set\n",
    "\n",
    "x_train, y_train = fidle.utils.shuffle_np_dataset(x_train, y_train)\n",
    "\n",
    "# ---- Sort Meta\n",
    "\n",
    "combined = list(zip(x_meta,y_meta))\n",
    "combined.sort(key=lambda x: x[1])\n",
    "x_meta,y_meta = zip(*combined)\n",
    "\n",
    "chrono.show()"
   ]
  },
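  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "About the shuffle above : images and labels must be shuffled **together**, otherwise the pairing is lost.  \n",
    "A minimal sketch of the idea (not the actual `fidle.utils.shuffle_np_dataset` implementation) : draw a single random permutation of the indices and apply it to both arrays."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# ---- Sketch only : shuffle two arrays with the same random permutation,\n",
    "#      so that each image stays aligned with its label\n",
    "#\n",
    "def shuffle_paired(x, y, seed=None):\n",
    "    rng = np.random.default_rng(seed)\n",
    "    p   = rng.permutation(len(x))\n",
    "    return x[p], y[p]\n",
    "\n",
    "# Example on a small slice (to avoid re-shuffling the real training set)\n",
    "xs, ys = shuffle_paired(x_train[:8], y_train[:8], seed=42)\n",
    "print(ys)"
   ]
  },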
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 4 - Few statistics about train dataset\n",
    "We want to know if our images are homogeneous in terms of size, ratio, width or height.\n",
    "\n",
    "### 4.1 - Do statistics "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_size  = []\n",
    "train_ratio = []\n",
    "train_lx    = []\n",
    "train_ly    = []\n",
    "\n",
    "test_size   = []\n",
    "test_ratio  = []\n",
    "test_lx     = []\n",
    "test_ly     = []\n",
    "\n",
    "for image in x_train:\n",
    "    (lx,ly,lz) = image.shape\n",
    "    train_size.append(lx*ly/1024)\n",
    "    train_ratio.append(lx/ly)\n",
    "    train_lx.append(lx)\n",
    "    train_ly.append(ly)\n",
    "\n",
    "for image in x_test:\n",
    "    (lx,ly,lz) = image.shape\n",
    "    test_size.append(lx*ly/1024)\n",
    "    test_ratio.append(lx/ly)\n",
    "    test_lx.append(lx)\n",
    "    test_ly.append(ly)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.2 - Show statistics"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "figsize=(10,4)\n",
    "# ------ Global stuff\n",
    "print(\"x_train shape : \",x_train.shape)\n",
    "print(\"y_train shape : \",y_train.shape)\n",
    "print(\"x_test  shape : \",x_test.shape)\n",
    "print(\"y_test  shape : \",y_test.shape)\n",
    "\n",
    "# ------ Statistics / sizes\n",
    "plt.figure(figsize=figsize)\n",
    "plt.hist([train_size,test_size], bins=100)\n",
    "plt.gca().set(title='Sizes in Kpixels - Train=[{:5.2f}, {:5.2f}]'.format(min(train_size),max(train_size)), \n",
    "              ylabel='Population', xlim=[0,30])\n",
    "plt.legend(['Train','Test'])\n",
    "fidle.scrawler.save_fig('01-stats-sizes')\n",
    "plt.show()\n",
    "\n",
    "# ------ Statistics / ratio lx/ly\n",
    "plt.figure(figsize=figsize)\n",
    "plt.hist([train_ratio,test_ratio], bins=100)\n",
    "plt.gca().set(title='Ratio lx/ly - Train=[{:5.2f}, {:5.2f}]'.format(min(train_ratio),max(train_ratio)), \n",
    "              ylabel='Population', xlim=[0.8,1.2])\n",
    "plt.legend(['Train','Test'])\n",
    "fidle.scrawler.save_fig('02-stats-ratios')\n",
    "plt.show()\n",
    "\n",
    "# ------ Statistics / lx\n",
    "plt.figure(figsize=figsize)\n",
    "plt.hist([train_lx,test_lx], bins=100)\n",
    "plt.gca().set(title='Images lx - Train=[{:5.2f}, {:5.2f}]'.format(min(train_lx),max(train_lx)), \n",
    "              ylabel='Population', xlim=[20,150])\n",
    "plt.legend(['Train','Test'])\n",
    "fidle.scrawler.save_fig('03-stats-lx')\n",
    "plt.show()\n",
    "\n",
    "# ------ Statistics / ly\n",
    "plt.figure(figsize=figsize)\n",
    "plt.hist([train_ly,test_ly], bins=100)\n",
    "plt.gca().set(title='Images ly - Train=[{:5.2f}, {:5.2f}]'.format(min(train_ly),max(train_ly)), \n",
    "              ylabel='Population', xlim=[20,150])\n",
    "plt.legend(['Train','Test'])\n",
    "fidle.scrawler.save_fig('04-stats-ly')\n",
    "plt.show()\n",
    "\n",
    "# ------ Statistics / classId\n",
    "plt.figure(figsize=figsize)\n",
    "plt.hist([y_train,y_test], bins=43)\n",
    "plt.gca().set(title='ClassesId', ylabel='Population', xlim=[0,43])\n",
    "plt.legend(['Train','Test'])\n",
    "fidle.scrawler.save_fig('05-stats-classes')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 5 - List of classes\n",
    "What are the 43 classes of our images..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fidle.scrawler.images( x_meta,y_meta, range(43), columns=8, x_size=1.4, y_size=1.4, \n",
    "                       colorbar=False, y_pred=None, cm='binary', save_as='06-meta-signs')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 6 - What does it really look like"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# ---- Get and show few images\n",
    "\n",
    "samples = [ random.randint(0,len(x_train)-1) for i in range(32)]\n",
    "fidle.scrawler.images( x_train,y_train, samples, columns=8, x_size=1.5, y_size=1.5, \n",
    "                       colorbar=False, y_pred=None, cm='binary', save_as='07-real-signs')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 7 - dataset cooking...\n",
    "\n",
    "Images **must** :\n",
    " - have the **same size** to match the size of the network,\n",
    " - be **normalized**.  \n",
    " \n",
    "It is possible to work on **rgb** or **monochrome** images and to **equalize** the histograms.   \n",
    "\n",
    "See : [Exposure with scikit-image](https://scikit-image.org/docs/dev/api/skimage.exposure.html)  \n",
    "See : [Local histogram equalization](https://scikit-image.org/docs/dev/api/skimage.filters.rank.html#skimage.filters.rank.equalize)  \n",
    "See : [Histogram equalization](https://scikit-image.org/docs/dev/api/skimage.exposure.html#skimage.exposure.equalize_hist)  \n",
    "\n",
    "### 7.1 - Enhancement cooking\n",
    "A nice function for preparing our data.  \n",
    "Input: a set of images (numpy array)  \n",
    "Output: a enhanced images, resized and reprocessed (numpy array)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def images_enhancement(images, width=25, height=25, proc='RGB'):\n",
    "    '''\n",
    "    Resize and convert images - doesn't change originals.\n",
    "    input images must be RGBA or RGB.\n",
    "    Note : all outputs are fixed size numpy array of float32\n",
    "    args:\n",
    "        images :         images list\n",
    "        width,height :   new images size (25,25)\n",
    "        mode :           RGB | RGB-HE | L | L-HE | L-LHE | L-CLAHE\n",
    "    return:\n",
    "        numpy array of enhanced images\n",
    "    '''\n",
    "    lz={ 'RGB':3, 'RGB-HE':3, 'L':1, 'L-HE':1, 'L-LHE':1, 'L-CLAHE':1}[proc]\n",
    "    \n",
    "    out=[]\n",
    "    for img in images:\n",
    "        \n",
    "        # ---- if RGBA, convert to RGB\n",
    "        if img.shape[2]==4:\n",
    "            img=color.rgba2rgb(img)\n",
    "            \n",
    "        # ---- Resize\n",
    "        img = transform.resize(img, (width,height))\n",
    "\n",
    "        # ---- RGB / Histogram Equalization\n",
    "        if proc=='RGB-HE':\n",
    "            hsv = color.rgb2hsv(img.reshape(width,height,3))\n",
    "            hsv[:, :, 2] = exposure.equalize_hist(hsv[:, :, 2])\n",
    "            img = color.hsv2rgb(hsv)\n",
    "        \n",
    "        # ---- Grayscale\n",
    "        if proc=='L':\n",
    "            img=color.rgb2gray(img)\n",
    "            \n",
    "        # ---- Grayscale / Histogram Equalization\n",
    "        if proc=='L-HE':\n",
    "            img=color.rgb2gray(img)\n",
    "            img=exposure.equalize_hist(img)\n",
    "            \n",
    "        # ---- Grayscale / Local Histogram Equalization\n",
    "        if proc=='L-LHE':        \n",
    "            img=color.rgb2gray(img)\n",
    "            img = img_as_ubyte(img)\n",
    "            img=rank.equalize(img, disk(10))/255.\n",
    "        \n",
    "        # ---- Grayscale / Contrast Limited Adaptive Histogram Equalization (CLAHE)\n",
    "        if proc=='L-CLAHE':\n",
    "            img=color.rgb2gray(img)\n",
    "            img=exposure.equalize_adapthist(img)\n",
    "            \n",
    "        # ---- Add image in list of list\n",
    "        out.append(img)\n",
    "        fidle.utils.update_progress('Enhancement: ',len(out),len(images))\n",
    "\n",
    "    # ---- Reshape images\n",
    "    #     (-1, width,height,1) for L\n",
    "    #     (-1, width,height,3) for RGB\n",
    "    #\n",
    "    out = np.array(out,dtype='float32')\n",
    "    out = out.reshape(-1,width,height,lz)\n",
    "    return out"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 7.2 - To get an idea of the different recipes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "i=random.randint(0,len(x_train)-16)\n",
    "x_samples = x_train[i:i+16]\n",
    "y_samples = y_train[i:i+16]\n",
    "\n",
    "datasets  = {}\n",
    "\n",
    "datasets['RGB']      = images_enhancement( x_samples, width=25, height=25, proc='RGB'  )\n",
    "datasets['RGB-HE']   = images_enhancement( x_samples, width=25, height=25, proc='RGB-HE'  )\n",
    "datasets['L']        = images_enhancement( x_samples, width=25, height=25, proc='L'  )\n",
    "datasets['L-HE']     = images_enhancement( x_samples, width=25, height=25, proc='L-HE'  )\n",
    "datasets['L-LHE']    = images_enhancement( x_samples, width=25, height=25, proc='L-LHE'  )\n",
    "datasets['L-CLAHE']  = images_enhancement( x_samples, width=25, height=25, proc='L-CLAHE'  )\n",
    "\n",
    "fidle.utils.subtitle('EXPECTED')\n",
    "x_expected=[ x_meta[i] for i in y_samples]\n",
    "fidle.scrawler.images(x_expected, y_samples, range(12), columns=12, x_size=1, y_size=1,\n",
    "                colorbar=False, y_pred=None, cm='binary', save_as='08-expected')\n",
    "\n",
    "fidle.utils.subtitle('ORIGINAL')\n",
    "fidle.scrawler.images(x_samples,  y_samples, range(12), columns=12, x_size=1, y_size=1, \n",
    "                colorbar=False, y_pred=None, cm='binary', save_as='09-original')\n",
    "\n",
    "fidle.utils.subtitle('ENHANCED')\n",
    "n=10\n",
    "for k,d in datasets.items():\n",
    "    print(\"dataset : {}  min,max=[{:.3f},{:.3f}]  shape={}\".format(k,d.min(),d.max(), d.shape))\n",
    "    fidle.scrawler.images(d, y_samples, range(12), columns=12, x_size=1, y_size=1, \n",
    "                    colorbar=False, y_pred=None, cm='binary', save_as=f'{n}-enhanced-{k}')\n",
    "    n+=1\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 7.3 - Cook and save\n",
    "\n",
    "A function to save a dataset (h5 file)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def save_h5_dataset(x_train, y_train, x_test, y_test, x_meta,y_meta, filename):\n",
    "        \n",
    "    # ---- Create h5 file\n",
    "    with h5py.File(filename, \"w\") as f:\n",
    "        f.create_dataset(\"x_train\", data=x_train)\n",
    "        f.create_dataset(\"y_train\", data=y_train)\n",
    "        f.create_dataset(\"x_test\",  data=x_test)\n",
    "        f.create_dataset(\"y_test\",  data=y_test)\n",
    "        f.create_dataset(\"x_meta\",  data=x_meta)\n",
    "        f.create_dataset(\"y_meta\",  data=y_meta)\n",
    "        \n",
    "    # ---- done\n",
    "    size=os.path.getsize(filename)/(1024*1024)\n",
    "    print('Dataset : {:24s}  shape : {:22s} size : {:6.1f} Mo   (saved)'.format(filename, str(x_train.shape),size))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Generate enhanced datasets :"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# ---- Size and processings\n",
    "#\n",
    "all_size= [24, 48]\n",
    "all_proc=['RGB', 'RGB-HE', 'L', 'L-LHE']\n",
    "\n",
    "# ---- Do it\n",
    "#\n",
    "chrono.start()\n",
    "\n",
    "n_train = int( len(x_train)*scale )\n",
    "n_test  = int( len(x_test)*scale )\n",
    "\n",
    "fidle.utils.subtitle('Parameters :')\n",
    "print(f'Scale is : {scale}')\n",
    "print(f'x_train length is : {n_train}')\n",
    "print(f'x_test  length is : {n_test}')\n",
    "print(f'output dir is     : {output_dir}\\n')\n",
    "\n",
    "fidle.utils.subtitle('Running...')\n",
    "\n",
    "fidle.utils.mkdir(output_dir)\n",
    "\n",
    "for s in all_size:\n",
    "    for m in all_proc:\n",
    "        # ---- A nice dataset name\n",
    "        filename = f'{output_dir}/set-{s}x{s}-{m}.h5'\n",
    "        fidle.utils.subtitle(f'Dataset : {filename}')\n",
    "        \n",
    "        # ---- Enhancement\n",
    "        #      Note : x_train is a numpy array of python objects (images with <> sizes)\n",
    "        #             but images_enhancement() return a real array of float64 numpy (images with same size)\n",
    "        #             so, we can save it in nice h5 files\n",
    "        #\n",
    "        x_train_new = images_enhancement( x_train[:n_train], width=s, height=s, proc=m )\n",
    "        x_test_new  = images_enhancement( x_test[:n_test],   width=s, height=s, proc=m )\n",
    "        x_meta_new  = images_enhancement( x_meta,            width=s, height=s, proc='RGB' )\n",
    "        \n",
    "        # ---- Save\n",
    "        save_h5_dataset( x_train_new, y_train[:n_train], x_test_new, y_test[:n_test], x_meta_new,y_meta, filename)\n",
    "\n",
    "x_train_new,x_test_new=0,0\n",
    "\n",
    "print('\\nDone.')\n",
    "chrono.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 8 - Reload data to be sure ;-)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "chrono.start()\n",
    "\n",
    "dataset='set-24x24-L'\n",
    "samples=range(24)\n",
    "\n",
    "with  h5py.File(f'{output_dir}/{dataset}.h5','r') as f:\n",
    "    x_tmp = f['x_train'][:]\n",
    "    y_tmp = f['y_train'][:]\n",
    "    print(\"dataset loaded from h5 file.\")\n",
    "\n",
    "fidle.scrawler.images(x_tmp,y_tmp, samples, columns=8, x_size=1.5, y_size=1.5, \n",
    "                colorbar=False, y_pred=None, cm='binary', save_as='16-enhanced_images')\n",
    "x_tmp,y_tmp=0,0\n",
    "\n",
    "chrono.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fidle.end()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "<img width=\"80px\" src=\"../fidle/img/logo-paysage.svg\"></img>"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.9.2 ('fidle-env')",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.2"
  },
  "vscode": {
   "interpreter": {
    "hash": "b3929042cc22c1274d74e3e946c52b845b57cb6d84f2d591ffe0519b38e4896d"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}