%% Cell type:markdown id: tags:

German Traffic Sign Recognition Benchmark (GTSRB)
=================================================
---
Introduction au Deep Learning (IDLE) - S. Aria, E. Maldonado, JL. Parouty - CNRS/SARI/DEVLOG - 2020

## Episode 2 : First Convolutions

Our main steps:
- Read dataset
- Build a model
- Train the model
- Evaluate the model

## 1/ Import and init
%% Cell type:code id: tags:

``` python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import time

import idle.pwk as ooo

ooo.init()
```
%% Output

IDLE 2020 - Practical Work Module
Version : 0.1.1
Run time : Monday 6 January 2020, 14:35:14
Matplotlib style : idle/talk.mplstyle
TensorFlow version : 2.0.0
Keras version : 2.2.4-tf

%% Cell type:markdown id: tags:
## 2/ Reload dataset (RGB25)
The dataset is one of the saved variants: RGB25, RGB35, L25, L35, etc.
First of all, we're going to use the **RGB25** dataset.
(with a GPU, it only takes about 35 s, compared with more than 5 min on a CPU!)
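The image shape is set by hand in the next cell; it could equally be derived from the dataset name. A minimal, hypothetical sketch (the `shapes` mapping below is an assumption, not part of the course code), assuming the naming convention RGB&lt;size&gt; = colour and L&lt;size&gt; = grayscale for the saved variants:

%% Cell type:code id: tags:

``` python
# Hypothetical helper (assumption): map each saved variant to its image shape,
# so that img_lx, img_ly, img_lz follow automatically from the dataset name.
shapes = {'RGB25': (25, 25, 3), 'RGB35': (35, 35, 3),
          'L25':   (25, 25, 1), 'L35':   (35, 35, 1)}

dataset = 'RGB25'
img_lx, img_ly, img_lz = shapes[dataset]
```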
%% Cell type:code id: tags:

``` python
%%time

dataset = 'RGB25'
img_lx  = 25
img_ly  = 25
img_lz  = 3

# ---- Read dataset
x_train = np.load('./data/{}/x_train.npy'.format(dataset))
y_train = np.load('./data/{}/y_train.npy'.format(dataset))
x_test  = np.load('./data/{}/x_test.npy'.format(dataset))
y_test  = np.load('./data/{}/y_test.npy'.format(dataset))

# ---- Reshape data
x_train = x_train.reshape( x_train.shape[0], img_lx, img_ly, img_lz)
x_test  = x_test.reshape(  x_test.shape[0],  img_lx, img_ly, img_lz)

input_shape = (img_lx, img_ly, img_lz)

print("Dataset loaded, size={:.1f} Mo\n".format(ooo.get_directory_size('./data/'+dataset)))
```
%% Output

Dataset loaded, size=742.0 Mo

CPU times: user 0 ns, sys: 708 ms, total: 708 ms
Wall time: 6.07 s

%% Cell type:markdown id: tags:
## 3/ Have a look at the dataset
Note: data must be reshaped for matplotlib
%% Cell type:code id: tags:

``` python
print("x_train : ", x_train.shape)
print("y_train : ", y_train.shape)
print("x_test  : ", x_test.shape)
print("y_test  : ", y_test.shape)

if img_lz>1:
    ooo.plot_images(x_train.reshape(-1,img_lx,img_ly,img_lz), y_train, range(6),  columns=3,  x_size=4, y_size=3)
    ooo.plot_images(x_train.reshape(-1,img_lx,img_ly,img_lz), y_train, range(36), columns=12, x_size=1, y_size=1)
else:
    ooo.plot_images(x_train.reshape(-1,img_lx,img_ly), y_train, range(6),  columns=6,  x_size=2, y_size=2)
    ooo.plot_images(x_train.reshape(-1,img_lx,img_ly), y_train, range(36), columns=12, x_size=1, y_size=1)
```
%% Output

x_train :  (39209, 25, 25, 3)
y_train :  (39209,)
x_test  :  (12630, 25, 25, 3)
y_test  :  (12630,)

%% Cell type:markdown id: tags:
## 4/ Create model

%% Cell type:code id: tags:

``` python
batch_size  = 128
num_classes = 43
epochs      = 5
```

%% Cell type:code id: tags:
``` python
keras.backend.clear_session()

model = keras.models.Sequential()

model.add( keras.layers.Conv2D(96, (3,3), activation='relu', input_shape=(img_lx, img_ly, img_lz)))
model.add( keras.layers.MaxPooling2D((2, 2)))

model.add( keras.layers.Conv2D(192, (3, 3), activation='relu'))
model.add( keras.layers.MaxPooling2D((2, 2)))

model.add( keras.layers.Flatten())
model.add( keras.layers.Dense(3072, activation='relu'))
model.add( keras.layers.Dense(500, activation='relu'))
model.add( keras.layers.Dense(43, activation='softmax'))

model.summary()

# ---- sparse_categorical_crossentropy accepts the integer class ids stored in
#      y_train directly, so the labels do not need to be one-hot encoded
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```
%% Output

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 23, 23, 96)        2688
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 11, 11, 96)        0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 9, 9, 192)         166080
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 4, 4, 192)         0
_________________________________________________________________
flatten (Flatten)            (None, 3072)              0
_________________________________________________________________
dense (Dense)                (None, 3072)              9440256
_________________________________________________________________
dense_1 (Dense)              (None, 500)               1536500
_________________________________________________________________
dense_2 (Dense)              (None, 43)                21543
=================================================================
Total params: 11,167,067
Trainable params: 11,167,067
Non-trainable params: 0
_________________________________________________________________
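%% Cell type:markdown id: tags:

The parameter counts in the summary can be checked by hand: each Conv2D layer has kernel_h × kernel_w × in_channels × filters weights plus one bias per filter, and each Dense layer has in × out weights plus out biases. A minimal sketch (not part of the original notebook) reproducing the numbers above:

%% Cell type:code id: tags:

``` python
# Sanity check of the summary's parameter counts (weights + biases)
conv2d   = 3*3*3*96   + 96       # ->     2 688
conv2d_1 = 3*3*96*192 + 192      # ->   166 080
dense    = 3072*3072  + 3072     # -> 9 440 256  (flatten gives 4*4*192 = 3072)
dense_1  = 3072*500   + 500      # -> 1 536 500
dense_2  = 500*43     + 43       # ->    21 543

print(conv2d + conv2d_1 + dense + dense_1 + dense_2)   # 11 167 067
```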
%% Cell type:markdown id: tags:

## 5/ Run model

%% Cell type:code id: tags:
``` python
%%time

history = model.fit( x_train, y_train,
                     batch_size=batch_size,
                     epochs=epochs,
                     verbose=1,
                     validation_data=(x_test, y_test))
```
%% Output

Train on 39209 samples, validate on 12630 samples
Epoch 1/5
39209/39209 [==============================] - 18s 468us/sample - loss: 0.9595 - accuracy: 0.7357 - val_loss: 0.4068 - val_accuracy: 0.9015
Epoch 2/5
39209/39209 [==============================] - 16s 417us/sample - loss: 0.0876 - accuracy: 0.9770 - val_loss: 0.3472 - val_accuracy: 0.9190
Epoch 3/5
39209/39209 [==============================] - 16s 409us/sample - loss: 0.0375 - accuracy: 0.9900 - val_loss: 0.2917 - val_accuracy: 0.9363
Epoch 4/5
39209/39209 [==============================] - 17s 421us/sample - loss: 0.0263 - accuracy: 0.9928 - val_loss: 0.3384 - val_accuracy: 0.9284
Epoch 5/5
39209/39209 [==============================] - 17s 421us/sample - loss: 0.0237 - accuracy: 0.9929 - val_loss: 0.3022 - val_accuracy: 0.9433
CPU times: user 16min 31s, sys: 1min 27s, total: 17min 59s
Wall time: 1min 23s
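%% Cell type:markdown id: tags:

Here the test set is passed as `validation_data`, so the same images are used for per-epoch monitoring and for the final evaluation below. A minimal alternative sketch (an option, not the course's method) that holds out part of the training set instead, using Keras's `validation_split` argument:

%% Cell type:code id: tags:

``` python
# Alternative sketch: reserve 10% of the training set for validation and
# keep x_test / y_test strictly for the final evaluation.
history = model.fit( x_train, y_train,
                     batch_size=batch_size,
                     epochs=epochs,
                     verbose=1,
                     validation_split=0.1)
```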
%% Cell type:markdown id: tags:

## 6/ Evaluation

%% Cell type:code id: tags:

``` python
score = model.evaluate(x_test, y_test, verbose=0)

print('Test loss : {:5.4f}'.format(score[0]))
print('Test accuracy : {:5.4f}'.format(score[1]))
```
%% Output

Test loss : 0.3022
Test accuracy : 0.9433

%% Cell type:markdown id: tags:
---
### Results :
```
L25   : size=250 Mo   93.15%
RGB25 : size=740 Mo   94.33%
...
```
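The `history` object returned by `model.fit` above keeps the per-epoch metrics, which makes it easy to compare runs on different dataset variants. A minimal sketch (not part of the original notebook; the key names follow TensorFlow 2.0's 'accuracy' / 'val_accuracy' convention):

%% Cell type:code id: tags:

``` python
# Sketch: plot the per-epoch accuracy recorded by model.fit
plt.plot(history.history['accuracy'],     label='train')
plt.plot(history.history['val_accuracy'], label='validation')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()
```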
%% Cell type:code id: tags:

``` python
```