{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img width=\"800px\" src=\"../fidle/img/header.svg\"></img>\n",
"\n",
"# <!-- TITLE --> [PLSHEEP3] - A DCGAN to Draw a Sheep, using Pytorch Lightning\n",
"<!-- DESC --> \"Draw me a sheep\", revisited with a DCGAN, using Pytorch Lightning\n",
"<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->\n",
"\n",
"## Objectives :\n",
" - Build and train a DCGAN model with the Quick Draw dataset\n",
" - Understanding DCGAN\n",
"\n",
"The [Quick draw dataset](https://quickdraw.withgoogle.com/data) contains about 50.000.000 drawings, made by real people... \n",
"We are using a subset of 117.555 of Sheep drawings \n",
"To get the dataset : [https://github.com/googlecreativelab/quickdraw-dataset](https://github.com/googlecreativelab/quickdraw-dataset) \n",
"Datasets in numpy bitmap file : [https://console.cloud.google.com/storage/quickdraw_dataset/full/numpy_bitmap](https://console.cloud.google.com/storage/quickdraw_dataset/full/numpy_bitmap) \n",
"Sheep dataset : [https://storage.googleapis.com/quickdraw_dataset/full/numpy_bitmap/sheep.npy](https://storage.googleapis.com/quickdraw_dataset/full/numpy_bitmap/sheep.npy) (94.3 Mo)\n",
"\n",
"\n",
"## What we're going to do :\n",
"\n",
" - Have a look to the dataset\n",
" - Defining a GAN model\n",
" - Build the model\n",
" - Train it\n",
" - Have a look of the results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 1 - Init and parameters\n",
"#### Python init"
]
},
{
"cell_type": "code",
"metadata": {
"tags": []
},
"source": [
"import os\n",
"import sys\n",
"import shutil\n",
"\n",
"import numpy as np\n",
"import torch\n",
"from lightning import Trainer\n",
"from lightning.pytorch.callbacks import ModelCheckpoint\n",
"from lightning.pytorch.loggers.tensorboard import TensorBoardLogger\n",
"\n",
"import fidle\n",
"\n",
"from modules.QuickDrawDataModule import QuickDrawDataModule\n",
"\n",
"from modules.GAN import GAN\n",
"from modules.WGANGP import WGANGP\n",
"from modules.Generators import *\n",
"from modules.Discriminators import *\n",
"\n",
"# Init Fidle environment\n",
"run_id, run_dir, datasets_dir = fidle.init('PLSHEEP3')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Few parameters\n",
"scale=1, epochs=20 : Need 22' on a V100"
]
},
{
"cell_type": "code",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"latent_dim = 128\n",
"\n",
"gan_name = 'WGANGP'\n",
"generator_name = 'Generator_2'\n",
"discriminator_name = 'Discriminator_3'\n",
" \n",
"lr = 0.0001\n",
"b1 = 0.5\n",
"b2 = 0.999\n",
"lambda_gp = 10\n",
"batch_size = 64\n",
"num_img = 48\n",
"fit_verbosity = 2\n",
" \n",
"dataset_file = datasets_dir+'/QuickDraw/origine/sheep.npy' \n",
"data_shape = (28,28,1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Override parameters (batch mode) - Just forget this cell"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"fidle.override('latent_dim', 'gan_name', 'generator_name', 'discriminator_name') \n",
"fidle.override('epochs', 'lr', 'b1', 'b2', 'batch_size', 'num_img', 'fit_verbosity')\n",
"fidle.override('dataset_file', 'data_shape', 'scale', 'num_workers' )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Cleaning"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"# You can comment these lines to keep each run...\n",
"shutil.rmtree(f'{run_dir}/figs', ignore_errors=True)\n",
"shutil.rmtree(f'{run_dir}/models', ignore_errors=True)\n",
"shutil.rmtree(f'{run_dir}/tb_logs', ignore_errors=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 2 - Get some nice data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Get a Nice DataModule\n",
"Our DataModule is defined in [./modules/QuickDrawDataModule.py](./modules/QuickDrawDataModule.py) \n",
"This is a [LightningDataModule](https://pytorch-lightning.readthedocs.io/en/stable/data/datamodule.html)"
]
},
{
"cell_type": "code",
"metadata": {
"tags": []
},
"dm = QuickDrawDataModule(dataset_file, scale, batch_size, num_workers=num_workers)\n",
"dm.setup()"
]
},
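{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, a minimal LightningDataModule for this kind of numpy bitmap file could look like the sketch below. \n",
"This is only an illustration of the API, not our actual code : the real module in [./modules/QuickDrawDataModule.py](./modules/QuickDrawDataModule.py) may differ in details."
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only - see modules/QuickDrawDataModule.py for the real one\n",
"import lightning as L\n",
"from torch.utils.data import DataLoader\n",
"\n",
"class QuickDrawDataModuleSketch(L.LightningDataModule):\n",
"\n",
"    def __init__(self, dataset_file, scale=1., batch_size=64, num_workers=2):\n",
"        super().__init__()\n",
"        self.dataset_file, self.scale = dataset_file, scale\n",
"        self.batch_size, self.num_workers = batch_size, num_workers\n",
"\n",
"    def setup(self, stage=None):\n",
"        data = np.load(self.dataset_file)                    # (N, 784) uint8 bitmaps\n",
"        data = data[:int(len(data)*self.scale)]              # keep a fraction of the drawings\n",
"        self.x_train = torch.from_numpy(data).float()/255    # normalize to [0,1]\n",
"        self.x_train = self.x_train.reshape(-1,28,28,1)      # match data_shape\n",
"\n",
"    def train_dataloader(self):\n",
"        return DataLoader(self.x_train, batch_size=self.batch_size,\n",
"                          num_workers=self.num_workers, shuffle=True)"
]
},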
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Have a look"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"dl = dm.train_dataloader()\n",
"batch_data = next(iter(dl))\n",
"\n",
"fidle.scrawler.images( batch_data.reshape(-1,28,28), indices=range(batch_size), columns=12, x_size=1, y_size=1, \n",
" y_padding=0,spines_alpha=0, save_as='01-Sheeps')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 3 - Get a nice GAN model\n",
"\n",
"Our Generators are defined in [./modules/Generators.py](./modules/Generators.py) \n",
"Our Discriminators are defined in [./modules/Discriminators.py](./modules/Discriminators.py) \n",
"\n",
"\n",
"Our GANs are defined in :\n",
" - [./modules/GAN.py](./modules/GAN.py) \n",
" - [./modules/WGANGP.py](./modules/WGANGP.py) \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve class by name\n",
"To be very flexible, we just specify class names as parameters. \n",
"The code below retrieves classes from their names."
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"module=sys.modules['__main__']\n",
"Generator_ = getattr(module, generator_name)\n",
"Discriminator_ = getattr(module, discriminator_name)\n",
"GAN_ = getattr(module, gan_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Basic test - Just to be sure it (could) works... ;-)"
]
},
{
"cell_type": "code",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"generator = Generator_( latent_dim=latent_dim, data_shape=data_shape )\n",
"discriminator = Discriminator_( latent_dim=latent_dim, data_shape=data_shape )\n",
"\n",
"print('\\nFew tests :\\n')\n",
"z = torch.randn(batch_size, latent_dim)\n",
"print('z size : ',z.size())\n",
"\n",
"fake_img = generator.forward(z)\n",
"print('fake_img : ', fake_img.size())\n",
"\n",
"p = discriminator.forward(fake_img)\n",
"print('pred fake : ', p.size())\n",
"\n",
"print('batch_data : ',batch_data.size())\n",
"\n",
"p = discriminator.forward(batch_data)\n",
"print('pred real : ', p.size())\n",
"\n",
"print('\\nShow fake images :')\n",
"nimg = fake_img.detach().numpy()\n",
"fidle.scrawler.images( nimg.reshape(-1,28,28), indices=range(batch_size), columns=12, x_size=1, y_size=1, \n",
" y_padding=0,spines_alpha=0, save_as='01-Sheeps')"
]
},
{
"cell_type": "code",
"source": [
"print('Fake images : ', fake_img.size())\n",
"print('Batch size : ', batch_data.size())\n",
"e = torch.distributions.uniform.Uniform(0, 1).sample([batch_size,1])\n",
"e = e[:None,None,None]\n",
"i = fake_img * e + (1-e)*batch_data\n",
"\n",
"print('\\ninterpolate images :')\n",
"nimg = i.detach().numpy()\n",
"fidle.scrawler.images( nimg.reshape(-1,28,28), indices=range(batch_size), columns=12, x_size=1, y_size=1, \n",
" y_padding=0,spines_alpha=0, save_as='01-Sheeps')\n"
]
},
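{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### About the gradient penalty\n",
"These interpolated images $\\hat{x} = e\\,\\tilde{x} + (1-e)\\,x$ are the key ingredient of the WGAN-GP gradient penalty \n",
"$\\lambda_{gp}\\,\\mathbb{E}\\big[(\\|\\nabla_{\\hat{x}} D(\\hat{x})\\|_2 - 1)^2\\big]$, which keeps the critic close to 1-Lipschitz. \n",
"Below is an illustrative sketch of how this penalty is typically computed ; our actual implementation lives in [./modules/WGANGP.py](./modules/WGANGP.py) and may differ in details."
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only - see modules/WGANGP.py for the real implementation\n",
"def gradient_penalty_sketch(discriminator, real, fake):\n",
"    e = torch.rand(real.size(0), 1, 1, 1, device=real.device)   # one epsilon per image\n",
"    x_hat = (e*fake + (1-e)*real).requires_grad_(True)          # interpolated images\n",
"    d_hat = discriminator(x_hat)\n",
"    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,\n",
"                                grad_outputs=torch.ones_like(d_hat),\n",
"                                create_graph=True)[0]\n",
"    grads = grads.reshape(real.size(0), -1)\n",
"    return ((grads.norm(2, dim=1) - 1)**2).mean()               # multiplied by lambda_gp in the loss"
]
},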
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### GAN model\n",
"To simplify our code, the GAN class is defined separately in the module [./modules/GAN.py](./modules/GAN.py) \n",
"Passing the classe names for generator/discriminator by parameter allows to stay modular and to use the PL checkpoints."
]
},
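{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustration of this trick, a LightningModule can store the class names with `save_hyperparameters()` and rebuild its networks from them, which is what makes `load_from_checkpoint()` work later on. \n",
"The sketch below only shows the idea ; the real classes in [./modules/GAN.py](./modules/GAN.py) and [./modules/WGANGP.py](./modules/WGANGP.py) do much more."
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only - see modules/GAN.py for the real implementation\n",
"import lightning as L\n",
"\n",
"class GANSketch(L.LightningModule):\n",
"\n",
"    def __init__(self, data_shape=None, latent_dim=None, generator_name=None, discriminator_name=None, **kwargs):\n",
"        super().__init__()\n",
"        self.save_hyperparameters()   # class names end up in the checkpoint\n",
"        module = sys.modules['__main__']\n",
"        # Rebuild generator / discriminator from their names, as in the cell above\n",
"        self.generator     = getattr(module, generator_name)(latent_dim=latent_dim, data_shape=data_shape)\n",
"        self.discriminator = getattr(module, discriminator_name)(latent_dim=latent_dim, data_shape=data_shape)"
]
},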
{
"cell_type": "code",
"source": [
"gan = GAN_( data_shape = data_shape,\n",
" lr = lr,\n",
" b1 = b1,\n",
" b2 = b2,\n",
" lambda_gp = lambda_gp,\n",
" batch_size = batch_size, \n",
" latent_dim = latent_dim, \n",
" generator_name = generator_name, \n",
" discriminator_name = discriminator_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 5 - Train it !\n",
"#### Instantiate Callbacks, Logger & co.\n",
"More about :\n",
"- [Checkpoints](https://pytorch-lightning.readthedocs.io/en/stable/common/checkpointing_basic.html)\n",
"- [modelCheckpoint](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.callbacks.ModelCheckpoint.html#pytorch_lightning.callbacks.ModelCheckpoint)"
]
},
{
"cell_type": "code",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"\n",
"# ---- for tensorboard logs\n",
"#\n",
"logger = TensorBoardLogger( save_dir = f'{run_dir}',\n",
" name = 'tb_logs' )\n",
"\n",
"log_dir = os.path.abspath(f'{run_dir}/tb_logs')\n",
"print('To access the logs with tensorboard, use this command line :')\n",
"print(f'tensorboard --logdir {log_dir}')\n",
"\n",
"# ---- To save checkpoints\n",
"#\n",
"callback_checkpoints = ModelCheckpoint( dirpath = f'{run_dir}/models', \n",
" filename = 'bestModel', \n",
" save_top_k = 1, \n",
" save_last = True,\n",
" every_n_epochs = 1, \n",
" monitor = \"g_loss\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Train it"
]
},
{
"cell_type": "code",
"metadata": {
"tags": []
},
"source": [
"\n",
"trainer = Trainer(\n",
" accelerator = \"auto\",\n",
" max_epochs = epochs,\n",
" callbacks = [callback_checkpoints],\n",
" log_every_n_steps = batch_size,\n",
" logger = logger\n",
")\n",
"\n",
"trainer.fit(gan, dm)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 6 - Reload our best model\n",
"Note : "
]
},
{
"cell_type": "code",
"gan = GAN.load_from_checkpoint(f'{run_dir}/models/bestModel.ckpt')"
"source": [
"nb_images = 96\n",
"\n",
"z = torch.randn(nb_images, latent_dim)\n",
"print('z size : ',z.size())\n",
"\n",
"if torch.cuda.is_available(): z=z.cuda()\n",
"\n",
"fake_img = gan.generator.forward(z)\n",
"print('fake_img : ', fake_img.size())\n",
"\n",
"fidle.scrawler.images( nimg.reshape(-1,28,28), indices=range(nb_images), columns=12, x_size=1, y_size=1, \n",
" y_padding=0,spines_alpha=0, save_as='01-Sheeps')"
]
},
{
"cell_type": "code",
"source": [
"fidle.end()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"<img width=\"80px\" src=\"../fidle/img/logo-paysage.svg\"></img>"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "fidle-env",
"language": "python",
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
}
},
"nbformat": 4,
"nbformat_minor": 4
}