From 79257d6d3cb77ba8b3beffc66bfec27661b90618 Mon Sep 17 00:00:00 2001
From: Jean-Luc Parouty <Jean-Luc.Parouty@grenoble-inp.fr>
Date: Fri, 11 Sep 2020 10:04:04 +0200
Subject: [PATCH] Update README.md

---
 README.md | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index e8030ab..58aa36b 100644
--- a/README.md
+++ b/README.md
@@ -63,12 +63,16 @@ Some other useful informations are also available in the [wiki](https://gricad-g
 Episode 4 : Improving the results with data augmentation
 [[GTS5] - CNN with GTSRB dataset - Full convolutions ](GTSRB/05-Full-convolutions.ipynb)
 Episode 5 : A lot of models, a lot of datasets and a lot of results.
-[[GTS6] - CNN with GTSRB dataset - Full convolutions as a batch](GTSRB/06-Full-convolutions-batch.ipynb)
+[[GTS6] - CNN with GTSRB dataset - Full convolutions as a batch](GTSRB/06-Notebook-as-a-batch.ipynb)
 Episode 6 : Run Full convolution notebook as a batch
-[[GTS7] - Full convolutions Report](GTSRB/07-Full-convolutions-reports.ipynb)
+[[GTS7] - CNN with GTSRB dataset - Show reports](GTSRB/07-Show-report.ipynb)
 Episode 7 : Displaying the reports of the different jobs
 [[TSB1] - Tensorboard with/from Jupyter ](GTSRB/99-Scripts-Tensorboard.ipynb)
 4 ways to use Tensorboard from the Jupyter environment
+[[BASH1] - OAR batch script](GTSRB/batch_oar.sh)
+ Bash script for OAR batch submission of GTSRB notebook
+[[BASH2] - SLURM batch script](GTSRB/batch_slurm.sh)
+ Bash script for SLURM batch submission of GTSRB notebooks
 [[IMDB1] - Text embedding with IMDB](IMDB/01-Embedding-Keras.ipynb)
 A very classical example of word embedding for text classification (sentiment analysis)
 [[IMDB2] - Text embedding with IMDB - Reloaded](IMDB/02-Prediction.ipynb)
@@ -83,11 +87,13 @@ Some other useful informations are also available in the [wiki](https://gricad-g
 Episode 3: Attempt to predict in the longer term
 [[VAE1] - Variational AutoEncoder (VAE) with MNIST](VAE/01-VAE-with-MNIST.ipynb)
 Episode 1 : Model construction and Training
+[[VAE1] - Variational AutoEncoder (VAE) with MNIST](VAE/01-VAE-with-MNIST.nbconvert.ipynb)
+ Episode 1 : Model construction and Training
 [[VAE2] - Variational AutoEncoder (VAE) with MNIST - Analysis](VAE/02-VAE-with-MNIST-post.ipynb)
 Episode 2 : Exploring our latent space
 [[VAE3] - About the CelebA dataset](VAE/03-About-CelebA.ipynb)
 Episode 3 : About the CelebA dataset, a more fun dataset !
-[[VAE4] - Preparation of the CelebA dataset](VAE/04-Prepare-CelebA-batch.ipynb)
+[[VAE4] - Preparation of the CelebA dataset](VAE/04-Prepare-CelebA-datasets.ipynb)
 Episode 4 : Preparation of a clustered dataset, batchable
 [[VAE5] - Checking the clustered CelebA dataset](VAE/05-Check-CelebA.ipynb)
 Episode 5 : Checking the clustered dataset
@@ -97,10 +103,10 @@ Some other useful informations are also available in the [wiki](https://gricad-g
 Episode 7 : Variational AutoEncoder (VAE) with CelebA (medium res.)
 [[VAE8] - Variational AutoEncoder (VAE) with CelebA - Analysis](VAE/08-VAE-withCelebA-post.ipynb)
 Episode 8 : Exploring latent space of our trained models
-[[BASH1] - OAR batch script](VAE/batch-oar.sh)
- Bash script for OAR batch submission of a notebook
-[[BASH2] - SLURM batch script](VAE/batch-slurm.sh)
- Bash script for SLURM batch submission of a notebook
+[[BASH1] - OAR batch script](VAE/batch_oar.sh)
+ Bash script for OAR batch submission of VAE notebook
+[[BASH2] - SLURM batch script](VAE/batch_slurm.sh)
+ Bash script for SLURM batch submission of VAE notebooks
 [[ACTF1] - Activation functions](Misc/Activation-Functions.ipynb)
 Some activation functions, with their derivatives.
 [[NP1] - A short introduction to Numpy](Misc/Numpy.ipynb)
--
GitLab
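The patch adds OAR and SLURM batch scripts for running the notebooks non-interactively, and the new `01-VAE-with-MNIST.nbconvert.ipynb` entry corresponds to the executed copy that `jupyter nbconvert` writes out. A minimal sketch of what such a SLURM script could look like; the job name, time limit, resource options, and notebook path below are illustrative assumptions, not the contents of the repository's actual `batch_slurm.sh`:

```shell
#!/bin/bash
# Hypothetical SLURM job script for batch-executing a notebook.
# All #SBATCH values and the notebook path are placeholders.
#SBATCH --job-name=GTSRB-full-conv
#SBATCH --output=full_conv_%j.out
#SBATCH --time=08:00:00
#SBATCH --gres=gpu:1

# Execute the notebook headlessly; by default the executed copy is
# saved alongside the original as <name>.nbconvert.ipynb
jupyter nbconvert --to notebook --execute GTSRB/05-Full-convolutions.ipynb
```

Submission would then be `sbatch batch_slurm.sh` (or `oarsub -S ./batch_oar.sh` for the OAR variant), with the executed notebook reloadable afterwards to inspect results.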