%% Cell type:markdown id: tags:
Text Embedding - IMDB dataset
=============================
---
Introduction au Deep Learning (IDLE) - S. Arias, E. Maldonado, JL. Parouty - CNRS/SARI/DEVLOG - 2020
## Review analysis
The objective is to guess whether our new, personal film reviews are **positive or negative**.
For this, we will use our previously saved model.
What we're going to do:
- Prepare the data
- Retrieve our saved model
- Evaluate the result
%% Cell type:markdown id: tags:
## Step 1 - Initialize the Python environment
%% Cell type:code id: tags:
``` python
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.datasets.imdb as imdb
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
import pandas as pd
import os,sys,h5py,json,re
from importlib import reload
sys.path.append('..')
import fidle.pwk as ooo        # Fidle course helper module
ooo.init()
```
%% Output
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-94e372328354> in <module>
7 import matplotlib.pyplot as plt
8 import matplotlib
----> 9 import seaborn as sns
10 import pandas as pd
11
ModuleNotFoundError: No module named 'seaborn'
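%% Cell type:markdown id: tags:
The traceback above just means that `seaborn` is missing from the current environment; a minimal fix, assuming pip is available in the kernel's environment, is to install it and re-run the imports:
%% Cell type:code id: tags:
``` python
# ---- Install the missing package, then re-run the import cell above
!pip install seaborn
```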
%% Cell type:markdown id: tags:
## Step 2 - Preparing the data
### 2.1 - Our reviews
%% Cell type:code id: tags:
``` python
reviews = [ "This film is particularly nice, a must see.",
            "Some films are classics and cannot be ignored.",
            "This movie is just abominable and doesn't deserve to be seen!"]
```
%% Cell type:markdown id: tags:
### 2.2 - Retrieve dictionaries
%% Cell type:code id: tags:
``` python
with open('./data/word_index.json', 'r') as fp:
    word_index = json.load(fp)
index_word = {index:word for word,index in word_index.items()}
```
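%% Cell type:markdown id: tags:
As a quick sanity check, we can peek at the first indices. The convention used throughout this notebook is: 0 for padding, 1 for `<start>`, 2 for unknown (out-of-vocabulary) words. Whether these special tokens have entries in `index_word` depends on how `word_index.json` was saved, hence the default value in this small sketch:
%% Cell type:code id: tags:
``` python
# ---- Peek at the dictionaries (0/1/2 are special : padding / <start> / unknown)
for i in range(8):
    print(f'{i:3d} : {index_word.get(i, "?")}')
```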
%% Cell type:markdown id: tags:
### 2.3 - Clean, index and pad
%% Cell type:code id: tags:
``` python
max_len    = 256
vocab_size = 10000

nb_reviews = len(reviews)
x_data     = []

# ---- For all reviews
for review in reviews:
    # ---- First index must be <start>
    index_review=[1]
    # ---- For all words
    for w in review.split(' '):
        # ---- Clean it (the IMDB word index is lowercase, without punctuation)
        w_clean = re.sub(r"[^a-zA-Z0-9]", "", w.lower())
        # ---- Not empty ?
        if len(w_clean)>0:
            # ---- Get its index, 2 (unknown) if absent or out of vocabulary
            w_index = word_index.get(w_clean,2)
            if w_index>=vocab_size : w_index=2
            # ---- Add the index
            index_review.append(w_index)
    # ---- Add the indexed review
    x_data.append(index_review)

# ---- Padding
x_data = keras.preprocessing.sequence.pad_sequences(x_data, value=0, padding='post', maxlen=max_len)
```
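%% Cell type:markdown id: tags:
As a minimal illustration of what post-padding does (the toy sequence below is made up, not taken from our reviews):
%% Cell type:code id: tags:
``` python
# ---- A 3-token sequence padded to length 8 with trailing zeros
demo = keras.preprocessing.sequence.pad_sequences([[1, 42, 7]], value=0, padding='post', maxlen=8)
print(demo)     # -> [[ 1 42  7  0  0  0  0  0]]
```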
%% Cell type:markdown id: tags:
### 2.4 - Have a look
%% Cell type:code id: tags:
``` python
def translate(x):
    """Map a sequence of indices back to words ('?' for unknown indices)."""
    return ' '.join( [index_word.get(i,'?') for i in x] )

for i in range(nb_reviews):
    # ---- Show tokens up to 5 past the first padding value
    #      (assumes the review is shorter than max_len, so padding exists)
    imax = np.where(x_data[i]==0)[0][0]+5
    print('\nText review :', reviews[i])
    print(f'x_data[{i}]  :', list(x_data[i][:imax]), '(...)')
    print('Translation :', translate(x_data[i][:imax]), '(...)')
```
%% Cell type:markdown id: tags:
## Step 3 - Bring back the model
%% Cell type:code id: tags:
``` python
model = keras.models.load_model('./run/models/best_model.h5')
```
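%% Cell type:markdown id: tags:
It is worth checking that the model came back intact; a quick, optional sanity check:
%% Cell type:code id: tags:
``` python
# ---- Show the restored architecture : layers, output shapes, parameter counts
model.summary()
```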
%% Cell type:markdown id: tags:
## Step 4 - Predict
%% Cell type:code id: tags:
``` python
y_pred = model.predict(x_data)
```
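%% Cell type:markdown id: tags:
Assuming the saved model ends with a single sigmoid unit (which the thresholding below relies on), `y_pred` has shape `(nb_reviews, 1)`, with scores in [0, 1]:
%% Cell type:code id: tags:
``` python
# ---- One score per review : close to 1 means positive, close to 0 negative
print('y_pred shape :', y_pred.shape)
```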
%% Cell type:markdown id: tags:
#### And the winner is:
%% Cell type:code id: tags:
``` python
for i in range(nb_reviews):
    print(f'\n{reviews[i]:<70} =>',('NEGATIVE' if y_pred[i][0]<0.5 else 'POSITIVE'),f'({y_pred[i][0]:.2f})')
```
![](fidle/img/00-Fidle-titre-01_m.png)
## About
This repository contains all the documents and links for the **Fidle training course**.
The objectives of this course, co-organized by the CNRS continuing-education program and the SARI and DEVLOG networks, are:
- Understand the **basics** of deep neural networks (Deep Learning)
- Build a **first hands-on experience** through simple, representative examples
- Understand the different types of networks, their **architectures** and their **use cases**
- Get familiar with the **TensorFlow/Keras** and **Jupyter lab** technologies, on GPU
- Get familiar with tier-2 (regional) and/or tier-1 (national) **academic computing environments**
## Available in this repository
You will find here:
- the slides of the presentations
- all the practical work, as Jupyter notebooks
- practical fact sheets and information:
  - **[SSH configuration](../-/wikis/howto-ssh)**
## Getting this repository and installation
To run these examples, you need an environment with the following packages:
- Python 3.6
- numpy
- TensorFlow 2.0
- scikit-image
- scikit-learn
- Matplotlib (including pyplot)
- seaborn
You can install such a predefined environment with:
```
conda env create -f environment.yml
```
To manage conda environments, see [the conda documentation](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#)
## Misc
...
#!/bin/bash
#OAR -n VAE with CelebA
#OAR -t gpu
#OAR -l /nodes=1/gpudevice=1,walltime=01:00:00
#OAR --stdout _batch/VAE_CelebA_%jobid%.out
#OAR --stderr _batch/VAE_CelebA_%jobid%.err
#OAR --project fidle
#---- For CPU, use :
#     OAR -l /nodes=1/core=32,walltime=01:00:00
#     and append 2>/dev/null to the ipython command
# -----------------------------------------------
# _ _ _
# | |__ __ _| |_ ___| |__
# | '_ \ / _` | __/ __| '_ \
# | |_) | (_| | || (__| | | |
# |_.__/ \__,_|\__\___|_| |_|
# VAE CelebA at GRICAD
# -----------------------------------------------
#
CONDA_ENV=deeplearning2
RUN_DIR=~/fidle/VAE
RUN_IPYNB=05.1-Batch-01.ipynb
# ---- Cuda Conda initialization
#
echo '------------------------------------------------------------'
echo "Start : $0"
echo '------------------------------------------------------------'
#
source /applis/environments/cuda_env.sh dahu 10.0
source /applis/environments/conda.sh
#
conda activate "$CONDA_ENV"
# ---- Run it...
#
cd "$RUN_DIR"
jupyter nbconvert --to notebook --execute "$RUN_IPYNB"
#!/bin/bash
#SBATCH --job-name="VAE_bizness"               # job name
#SBATCH --ntasks=1                             # number of tasks (a single process here)
#SBATCH --gres=gpu:1                           # number of GPUs to reserve (a single GPU here)
#SBATCH --cpus-per-task=10                     # number of cores to reserve (a quarter of the node)
#SBATCH --hint=nomultithread                   # reserve physical cores, not logical ones
#SBATCH --time=02:00:00                        # maximum requested run time (HH:MM:SS)
#SBATCH --output="_batch/VAE_%j.out"           # output file name
#SBATCH --error="_batch/VAE_%j.err"            # error file name (same pattern as the output)
#SBATCH --mail-user=Jean-Luc.Parouty@grenoble-inp.fr
#SBATCH --mail-type=ALL
# -----------------------------------------------
# _ _ _
# | |__ __ _| |_ ___| |__
# | '_ \ / _` | __/ __| '_ \
# | |_) | (_| | || (__| | | |
# |_.__/ \__,_|\__\___|_| |_|
# VAE CelebA at IDRIS
# -----------------------------------------------
#
MODULE_ENV="tensorflow-gpu/py3/2.0.0"
RUN_DIR="$WORK/fidle/VAE"
RUN_IPYNB="05.2-Variant.ipynb"
# ---- Welcome...
echo '------------------------------------------------------------'
echo "Start : $0"
echo '------------------------------------------------------------'
echo "Job id : $SLURM_JOB_ID"
echo "Job name : $SLURM_JOB_NAME"
echo "Job node list : $SLURM_JOB_NODELIST"
echo '------------------------------------------------------------'
echo "Notebook : $RUN_IPYNB"
echo "Run in : $RUN_DIR"
echo "With env. : $MODULE_ENV"
echo '------------------------------------------------------------'
# ---- Module
module load $MODULE_ENV
# ---- Run it...
cd "$RUN_DIR"
jupyter nbconvert --ExecutePreprocessor.timeout=-1 --to notebook --execute "$RUN_IPYNB"