Commit b32677bc authored 5 years ago by Soraya Arias

Add run directory creation

parent 79f354d1
No related branches found
No related tags found
No related merge requests found

Showing 1 changed file: GTSRB/02-First-convolutions.ipynb, with 2 additions and 1 deletion
...
...
@@ -116,7 +116,8 @@
 "sys.path.append('..')\n",
 "import fidle.pwk as ooo\n",
 "\n",
-"ooo.init()"
+"ooo.init()\n",
+"os.makedirs('./run/', mode=0o750, exist_ok=True)"
 ]
 },
 {
...
...
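The added call simply ensures that a `./run/` working directory exists before the notebook writes into it (for example the `./run/model.png` plot generated in Step 5). A minimal sketch of the same behaviour in isolation, using only the standard library:

``` python
import os

# Create the working directory if it does not exist yet.
# mode=0o750 : owner rwx, group r-x, others nothing (still subject to the umask).
# exist_ok=True : do not raise FileExistsError if the directory is already there.
os.makedirs('./run/', mode=0o750, exist_ok=True)
```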
%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/00-Fidle-header-01.svg"></img>
# <!-- TITLE --> [GTS2] - CNN with GTSRB dataset - First convolutions
<!-- DESC --> Episode 2 : First convolutions and first results
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->

## Objectives :
 - Recognize traffic signs
 - Understand the **principles** and **architecture** of a **convolutional neural network** for image classification

The German Traffic Sign Recognition Benchmark (GTSRB) is a dataset with more than 50,000 photos of road signs from about 40 classes.
The final aim is to recognise them!
A description is available here: http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset

## What we're going to do :
 - Read H5 dataset
 - Build a model
 - Train the model
 - Evaluate the model
## Step 1 - Import and init
### 1.1 - Python
%% Cell type:code id: tags:
``` python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard

import numpy as np
import matplotlib.pyplot as plt
import h5py
import os, time, sys
from importlib import reload

sys.path.append('..')
import fidle.pwk as ooo

ooo.init()
os.makedirs('./run/', mode=0o750, exist_ok=True)
```
%% Output
FIDLE 2020 - Practical Work Module
Version : 0.4.3
Run time : Friday 28 February 2020, 10:25:11
TensorFlow version : 2.0.0
Keras version : 2.2.4-tf
%% Cell type:markdown id: tags:
### 1.2 - Where are we ?
%% Cell type:code id: tags:
``` python
place, dataset_dir = ooo.good_place( { 'GRICAD' : f'{os.getenv("SCRATCH_DIR","")}/PROJECTS/pr-fidle/datasets/GTSRB',
                                       'IDRIS'  : f'{os.getenv("WORK","")}/datasets/GTSRB',
                                       'HOME'   : f'{os.getenv("HOME","")}/datasets/GTSRB'} )
```
%% Output
Well, we should be at GRICAD !
We are going to use: /bettik/PROJECTS/pr-fidle/datasets/GTSRB
%% Cell type:markdown id: tags:
## Step 2 - Load dataset
We're going to retrieve a previously recorded dataset.
For example: set-24x24-L
%% Cell type:code id: tags:
``` python
%%time

def read_dataset(dataset_dir, name):
    '''Reads h5 dataset from dataset_dir
    Args:
        dataset_dir : datasets dir
        name        : dataset name, without .h5
    Returns:    x_train,y_train,x_test,y_test data'''
    # ---- Read dataset
    filename = f'{dataset_dir}/{name}.h5'
    with h5py.File(filename, 'r') as f:
        x_train = f['x_train'][:]
        y_train = f['y_train'][:]
        x_test  = f['x_test'][:]
        y_test  = f['y_test'][:]
    # ---- done
    print('Dataset "{}" is loaded. ({:.1f} Mo)\n'.format(name, os.path.getsize(filename)/(1024*1024)))
    return x_train, y_train, x_test, y_test

x_train, y_train, x_test, y_test = read_dataset(dataset_dir, 'set-24x24-L')
```
%% Output
Dataset "set-24x24-L" is loaded. (228.8 Mo)
CPU times: user 16 ms, sys: 128 ms, total: 144 ms
Wall time: 239 ms
%% Cell type:markdown id: tags:
## Step 3 - Have a look at the dataset
We take a quick look as we go by...
%% Cell type:code id: tags:
``` python
print("x_train : ", x_train.shape)
print("y_train : ", y_train.shape)
print("x_test  : ", x_test.shape)
print("y_test  : ", y_test.shape)

ooo.plot_images(x_train, y_train, range(12), columns=6,  x_size=2, y_size=2)
ooo.plot_images(x_train, y_train, range(36), columns=12, x_size=1, y_size=1)
```
%% Output
x_train : (39209, 24, 24, 1)
y_train : (39209,)
x_test : (12630, 24, 24, 1)
y_test : (12630,)
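As a quick sanity check, the number of distinct labels can be read from `y_train`; it should match the 43 output units of the `Dense(43)` layer used in the models below (GTSRB defines 43 sign classes). A minimal sketch, assuming labels are stored as integer class ids:

``` python
# Count the distinct classes present in the training labels.
n_classes = len(np.unique(y_train))
print('Number of classes :', n_classes)   # expected : 43
```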
%% Cell type:markdown id: tags:
## Step 4 - Create model
We will now build a model and train it...
Some models :
%% Cell type:code id: tags:
``` python
# A basic model
#
def get_model_v1(lx,ly,lz):

    model = keras.models.Sequential()

    model.add( keras.layers.Conv2D(96, (3,3), activation='relu', input_shape=(lx,ly,lz)))
    model.add( keras.layers.MaxPooling2D((2, 2)))
    model.add( keras.layers.Dropout(0.2))

    model.add( keras.layers.Conv2D(192, (3, 3), activation='relu'))
    model.add( keras.layers.MaxPooling2D((2, 2)))
    model.add( keras.layers.Dropout(0.2))

    model.add( keras.layers.Flatten())
    model.add( keras.layers.Dense(1500, activation='relu'))
    model.add( keras.layers.Dropout(0.5))

    model.add( keras.layers.Dense(43, activation='softmax'))
    return model

# A more sophisticated model
#
def get_model_v2(lx,ly,lz):

    model = keras.models.Sequential()

    model.add( keras.layers.Conv2D(64, (3, 3), padding='same', input_shape=(lx,ly,lz), activation='relu'))
    model.add( keras.layers.Conv2D(64, (3, 3), activation='relu'))
    model.add( keras.layers.MaxPooling2D(pool_size=(2, 2)))
    model.add( keras.layers.Dropout(0.2))

    model.add( keras.layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
    model.add( keras.layers.Conv2D(128, (3, 3), activation='relu'))
    model.add( keras.layers.MaxPooling2D(pool_size=(2, 2)))
    model.add( keras.layers.Dropout(0.2))

    model.add( keras.layers.Conv2D(256, (3, 3), padding='same', activation='relu'))
    model.add( keras.layers.Conv2D(256, (3, 3), activation='relu'))
    model.add( keras.layers.MaxPooling2D(pool_size=(2, 2)))
    model.add( keras.layers.Dropout(0.2))

    model.add( keras.layers.Flatten())
    model.add( keras.layers.Dense(512, activation='relu'))
    model.add( keras.layers.Dropout(0.5))

    model.add( keras.layers.Dense(43, activation='softmax'))
    return model

# My sophisticated model, but small and fast
#
def get_model_v3(lx,ly,lz):

    model = keras.models.Sequential()

    model.add( keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(lx,ly,lz)))
    model.add( keras.layers.MaxPooling2D((2, 2)))
    model.add( keras.layers.Dropout(0.5))

    model.add( keras.layers.Conv2D(64, (3, 3), activation='relu'))
    model.add( keras.layers.MaxPooling2D((2, 2)))
    model.add( keras.layers.Dropout(0.5))

    model.add( keras.layers.Conv2D(128, (3, 3), activation='relu'))
    model.add( keras.layers.MaxPooling2D((2, 2)))
    model.add( keras.layers.Dropout(0.5))

    model.add( keras.layers.Conv2D(256, (3, 3), activation='relu'))
    model.add( keras.layers.MaxPooling2D((2, 2)))
    model.add( keras.layers.Dropout(0.5))

    model.add( keras.layers.Flatten())
    model.add( keras.layers.Dense(1152, activation='relu'))
    model.add( keras.layers.Dropout(0.5))

    model.add( keras.layers.Dense(43, activation='softmax'))
    return model
```
%% Cell type:markdown id: tags:
## Step 5 - Train the model
**Get the shape of my data :**
%% Cell type:code id: tags:
``` python
(n,lx,ly,lz) = x_train.shape
print("Images of the dataset have the following shape : ",(lx,ly,lz))
```
%% Output
Images of the dataset have the following shape :  (24, 24, 1)
%% Cell type:markdown id: tags:
**Get and compile a model, with the data shape :**
%% Cell type:code id: tags:
``` python
model = get_model_v1(lx,ly,lz)

model.summary()
img = keras.utils.plot_model( model, to_file='./run/model.png', show_shapes=True, show_layer_names=True, dpi=72)
display(img)

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```
%% Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_4 (Conv2D) (None, 22, 22, 96) 960
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 11, 11, 96) 0
_________________________________________________________________
dropout_6 (Dropout) (None, 11, 11, 96) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 9, 9, 192) 166080
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 4, 4, 192) 0
_________________________________________________________________
dropout_7 (Dropout) (None, 4, 4, 192) 0
_________________________________________________________________
flatten_2 (Flatten) (None, 3072) 0
_________________________________________________________________
dense_4 (Dense) (None, 1500) 4609500
_________________________________________________________________
dropout_8 (Dropout) (None, 1500) 0
_________________________________________________________________
dense_5 (Dense) (None, 43) 64543
=================================================================
Total params: 4,841,083
Trainable params: 4,841,083
Non-trainable params: 0
_________________________________________________________________
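The parameter counts above follow from the standard formulas: a `Conv2D` layer has `(kh*kw*channels_in + 1) * filters` parameters and a `Dense` layer has `(units_in + 1) * units_out`. A small sketch reproducing the totals of `get_model_v1` for 24x24x1 inputs:

``` python
# Recompute the parameter counts of get_model_v1 by hand.
conv1  = (3*3*1   + 1) * 96      # conv2d_4 :       960
conv2  = (3*3*96  + 1) * 192     # conv2d_5 :   166,080
dense1 = (4*4*192 + 1) * 1500    # dense_4  : 4,609,500  (flattened 4x4x192 = 3072 features)
dense2 = (1500    + 1) * 43      # dense_5  :    64,543
print('Total params :', conv1 + conv2 + dense1 + dense2)   # 4,841,083
```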
%% Cell type:markdown id: tags:
**Train it :**
%% Cell type:code id: tags:
``` python
%%time

batch_size = 64
epochs     =  5

# ---- Shuffle train data
x_train, y_train = ooo.shuffle_np_dataset(x_train, y_train)

# ---- Train
history = model.fit(  x_train, y_train,
                      batch_size      = batch_size,
                      epochs          = epochs,
                      verbose         = 1,
                      validation_data = (x_test, y_test))
```
%% Output
Train on 39209 samples, validate on 12630 samples
Epoch 1/5
39209/39209 [==============================] - 9s 225us/sample - loss: 1.2385 - accuracy: 0.6579 - val_loss: 0.4697 - val_accuracy: 0.8898
Epoch 2/5
39209/39209 [==============================] - 2s 61us/sample - loss: 0.2207 - accuracy: 0.9373 - val_loss: 0.3298 - val_accuracy: 0.9228
Epoch 3/5
39209/39209 [==============================] - 2s 61us/sample - loss: 0.1194 - accuracy: 0.9659 - val_loss: 0.2805 - val_accuracy: 0.9370
Epoch 4/5
39209/39209 [==============================] - 2s 61us/sample - loss: 0.0849 - accuracy: 0.9756 - val_loss: 0.2571 - val_accuracy: 0.9390
Epoch 5/5
39209/39209 [==============================] - 2s 61us/sample - loss: 0.0637 - accuracy: 0.9809 - val_loss: 0.2219 - val_accuracy: 0.9497
CPU times: user 16 s, sys: 2.3 s, total: 18.4 s
Wall time: 18.7 s
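The learning curves can also be drawn from the same `history` object with the `matplotlib` import made in Step 1; a minimal sketch, assuming the standard `history.history` keys reported above:

``` python
# Plot training vs. validation accuracy per epoch.
plt.figure(figsize=(8, 4))
plt.plot(history.history['accuracy'],     label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()
```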
%% Cell type:markdown id: tags:
**Evaluate it :**
%% Cell type:code id: tags:
``` python
max_val_accuracy = max(history.history["val_accuracy"])
print("Max validation accuracy is : {:.4f}".format(max_val_accuracy))
```
%% Output
Max validation accuracy is : 0.9497
%% Cell type:code id: tags:
``` python
score = model.evaluate(x_test, y_test, verbose=0)

print('Test loss : {:5.4f}'.format(score[0]))
print('Test accuracy : {:5.4f}'.format(score[1]))
```
%% Output
Test loss : 0.2219
Test accuracy : 0.9497
%% Cell type:markdown id: tags:
<div class="todo">
    What you can do:
    <ul>
        <li>Try the different models</li>
        <li>Try with different datasets</li>
        <li>Test different hyperparameters (epochs, batch size, optimization, etc.); a sketch of such a sweep follows below</li>
        <li>Create your own model</li>
    </ul>
</div>
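As a starting point for these suggestions, here is a minimal, illustrative sweep over the three model builders defined in Step 4 (short runs only, so the numbers will differ from the results above):

``` python
# Compare the three models with short training runs.
results = {}
for name, builder in [('v1', get_model_v1), ('v2', get_model_v2), ('v3', get_model_v3)]:
    m = builder(lx, ly, lz)
    m.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
    h = m.fit(x_train, y_train,
              batch_size=64, epochs=2, verbose=0,
              validation_data=(x_test, y_test))
    results[name] = max(h.history['val_accuracy'])

for name, acc in results.items():
    print(f'model {name} : best val_accuracy = {acc:.4f}')
```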
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/00-Fidle-logo-01.svg"></img>
...
...