Commit 9b1d460b authored by Jean-Luc Parouty
Notebooks indexation

Former-commit-id: ab88004f
parent e2986ca5
%% Cell type:markdown id: tags:
Deep Neural Network (DNN) - BHPD dataset
========================================
---
Introduction au Deep Learning (IDLE) - S. Arias, E. Maldonado, JL. Parouty - CNRS/SARI/DEVLOG - 2020
## A very simple example of **regression** (Premium edition):
The objective is to predict **housing prices** from a set of house features.
The **[Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html)** consists of the prices of houses in various places in Boston.
Alongside the price, the dataset also provides information such as the crime rate, the proportion of non-retail business in the town,
the age of the homeowners and many other attributes...
What we're going to do:
- (Retrieve data)
- (Preparing the data)
- (Build a model)
- Train and save the model
- Restore saved model
- Evaluate the model
- Make some predictions
%% Cell type:markdown id: tags:
## Step 1 - Import and init
%% Cell type:code id: tags:
``` python
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os,sys
from IPython.display import display, Markdown
from importlib import reload
sys.path.append('..')
import fidle.pwk as ooo
ooo.init()
os.makedirs('./run/models', mode=0o750, exist_ok=True)
```
%% Output
FIDLE 2020 - Practical Work Module
Version : 0.2.8
Run time : Saturday 15 February 2020, 12:32:05
TensorFlow version : 2.0.0
Keras version : 2.2.4-tf
%% Cell type:markdown id: tags:
## Step 2 - Retrieve data
### 2.1 - Option 1 : From Keras
Boston housing is a famous historic dataset, so we can get it directly from [Keras datasets](https://www.tensorflow.org/api_docs/python/tf/keras/datasets)
%% Cell type:raw id: tags:
(x_train, y_train), (x_test, y_test) = keras.datasets.boston_housing.load_data(test_split=0.2, seed=113)
%% Cell type:markdown id: tags:
### 2.2 - Option 2 : From a CSV file
More fun!
%% Cell type:code id: tags:
``` python
data = pd.read_csv('./data/BostonHousing.csv', header=0)
display(data.head(5).style.format("{0:.2f}"))
print('Missing data : ',data.isna().sum().sum(), ' Shape is : ', data.shape)
```
%% Output
Missing data : 0 Shape is : (506, 14)
%% Cell type:markdown id: tags:
## Step 3 - Preparing the data
### 3.1 - Split data
We will use 70% of the data for training and 30% for validation.
x will be the input data and y the expected output.
%% Cell type:code id: tags:
``` python
# ---- Split => train, test
#
data_train = data.sample(frac=0.7, axis=0)
data_test = data.drop(data_train.index)
# ---- Split => x,y (medv is price)
#
x_train = data_train.drop('medv', axis=1)
y_train = data_train['medv']
x_test = data_test.drop('medv', axis=1)
y_test = data_test['medv']
print('Original data shape was : ',data.shape)
print('x_train : ',x_train.shape, 'y_train : ',y_train.shape)
print('x_test : ',x_test.shape, 'y_test : ',y_test.shape)
```
%% Output
Original data shape was : (506, 14)
x_train : (354, 13) y_train : (354,)
x_test : (152, 13) y_test : (152,)
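%% Cell type:markdown id: tags:
**Aside :** `data.sample()` is unseeded above, so the split changes at every run. A minimal variation (an assumption, not in the original code) makes it reproducible by fixing the seed:
%% Cell type:code id: tags:
``` python
# Hypothetical variant : the same 70/30 split, but reproducible across runs
data_train = data.sample(frac=0.7, axis=0, random_state=42)
data_test  = data.drop(data_train.index)
```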
%% Cell type:markdown id: tags:
### 3.2 - Data normalization
**Note :**
- All input data must be normalized: both train and test.
- To do this, we subtract the mean and divide by the standard deviation.
- However, the test data must not be used in any way, not even to compute the normalization statistics.
- The mean and the standard deviation are therefore computed on the train data only.
%% Cell type:code id: tags:
``` python
display(x_train.describe().style.format("{0:.2f}").set_caption("Before normalization :"))
mean = x_train.mean()
std = x_train.std()
x_train = (x_train - mean) / std
x_test = (x_test - mean) / std
display(x_train.describe().style.format("{0:.2f}").set_caption("After normalization :"))
x_train, y_train = np.array(x_train), np.array(y_train)
x_test, y_test = np.array(x_test), np.array(y_test)
```
%% Output
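%% Cell type:markdown id: tags:
A quick sanity check (a sketch, not part of the original run) : after scaling with the train statistics, the train columns are centered by construction, while the test columns are only approximately centered.
%% Cell type:code id: tags:
``` python
# Effect of using train-only statistics
print('x_train means : ', x_train.mean(axis=0).round(2))   # ~0 by construction
print('x_test  means : ', x_test.mean(axis=0).round(2))    # close to 0, not exactly
```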
%% Cell type:markdown id: tags:
## Step 4 - Build a model
More information about :
- [Optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers)
- [Activation](https://www.tensorflow.org/api_docs/python/tf/keras/activations)
- [Loss](https://www.tensorflow.org/api_docs/python/tf/keras/losses)
- [Metrics](https://www.tensorflow.org/api_docs/python/tf/keras/metrics)
%% Cell type:code id: tags:
``` python
def get_model_v1(shape):
    model = keras.models.Sequential()
    model.add(keras.layers.Input(shape, name="InputLayer"))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n1'))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n2'))
    model.add(keras.layers.Dense(1, name='Output'))
    model.compile(optimizer = 'rmsprop',
                  loss      = 'mse',
                  metrics   = ['mae', 'mse'] )
    return model
```
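%% Cell type:markdown id: tags:
As an illustration of the compile options listed above, here is a hypothetical variant (not used below) : the same architecture compiled with the Adam optimizer instead of RMSprop.
%% Cell type:code id: tags:
``` python
# Hypothetical variant : identical layers, Adam instead of RMSprop
def get_model_v2(shape):
    model = keras.models.Sequential()
    model.add(keras.layers.Input(shape, name="InputLayer"))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n1'))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n2'))
    model.add(keras.layers.Dense(1, name='Output'))
    model.compile(optimizer='adam', loss='mse', metrics=['mae', 'mse'])
    return model
```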
%% Cell type:markdown id: tags:
## Step 5 - Train the model
### 5.1 - Get it
%% Cell type:code id: tags:
``` python
model=get_model_v1( (13,) )
model.summary()
keras.utils.plot_model( model, to_file='./run/model.png', show_shapes=True, show_layer_names=True, dpi=96)
```
%% Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Dense_n1 (Dense) (None, 64) 896
_________________________________________________________________
Dense_n2 (Dense) (None, 64) 4160
_________________________________________________________________
Output (Dense) (None, 1) 65
=================================================================
Total params: 5,121
Trainable params: 5,121
Non-trainable params: 0
_________________________________________________________________
<IPython.core.display.Image object>
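%% Cell type:markdown id: tags:
The parameter counts above can be checked by hand : each Dense layer has (inputs × units) weights plus one bias per unit.
%% Cell type:code id: tags:
``` python
# Parameter counts : weights + biases per layer
print('Dense_n1 : ', 13*64 + 64)   # 896
print('Dense_n2 : ', 64*64 + 64)   # 4160
print('Output   : ', 64*1  + 1)    # 65
```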
%% Cell type:markdown id: tags:
### 5.2 - Add callback
%% Cell type:code id: tags:
``` python
# ---- Callback : save the best model (lowest val_loss) seen during training
os.makedirs('./run/models', mode=0o750, exist_ok=True)
save_path = "./run/models/best_model.h5"
savemodel_callback = tf.keras.callbacks.ModelCheckpoint(filepath=save_path, verbose=0, save_best_only=True)
```
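%% Cell type:markdown id: tags:
A common companion to `ModelCheckpoint` (a sketch, not used in this notebook) is `EarlyStopping`, which halts training once `val_loss` stops improving.
%% Cell type:code id: tags:
``` python
# Sketch : stop when val_loss has not improved for 10 epochs
earlystop_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                                      patience=10,
                                                      restore_best_weights=True)
```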
%% Cell type:markdown id: tags:
### 5.3 - Train it
%% Cell type:code id: tags:
``` python
history = model.fit(x_train,
                    y_train,
                    epochs          = 100,
                    batch_size      = 10,
                    verbose         = 1,
                    validation_data = (x_test, y_test),
                    callbacks       = [savemodel_callback])
```
%% Output
Train on 354 samples, validate on 152 samples
Epoch 1/100
354/354 [==============================] - 1s 3ms/sample - loss: 446.5069 - mae: 19.1690 - mse: 446.5069 - val_loss: 328.7387 - val_mae: 16.4455 - val_mse: 328.7387
Epoch 2/100
354/354 [==============================] - 0s 301us/sample - loss: 206.7491 - mae: 12.2281 - mse: 206.7491 - val_loss: 102.8150 - val_mae: 8.6449 - val_mse: 102.8150
Epoch 3/100
354/354 [==============================] - 0s 302us/sample - loss: 65.8724 - mae: 6.2331 - mse: 65.8724 - val_loss: 33.7508 - val_mae: 4.5848 - val_mse: 33.7508
Epoch 4/100
354/354 [==============================] - 0s 318us/sample - loss: 33.4179 - mae: 4.2331 - mse: 33.4179 - val_loss: 27.0058 - val_mae: 3.9154 - val_mse: 27.0058
Epoch 5/100
354/354 [==============================] - 0s 312us/sample - loss: 24.9602 - mae: 3.5624 - mse: 24.9602 - val_loss: 23.2470 - val_mae: 3.5429 - val_mse: 23.2470
Epoch 6/100
354/354 [==============================] - 0s 316us/sample - loss: 21.4080 - mae: 3.2530 - mse: 21.4080 - val_loss: 22.1707 - val_mae: 3.4498 - val_mse: 22.1707
Epoch 7/100
354/354 [==============================] - 0s 262us/sample - loss: 18.3586 - mae: 3.0399 - mse: 18.3586 - val_loss: 24.4102 - val_mae: 3.4754 - val_mse: 24.4102
Epoch 8/100
354/354 [==============================] - 0s 307us/sample - loss: 16.9126 - mae: 2.8925 - mse: 16.9126 - val_loss: 20.1919 - val_mae: 3.2138 - val_mse: 20.1919
Epoch 9/100
354/354 [==============================] - 0s 312us/sample - loss: 15.5047 - mae: 2.7532 - mse: 15.5047 - val_loss: 19.0378 - val_mae: 3.0763 - val_mse: 19.0378
Epoch 10/100
354/354 [==============================] - 0s 273us/sample - loss: 14.5763 - mae: 2.6404 - mse: 14.5763 - val_loss: 19.9752 - val_mae: 3.0986 - val_mse: 19.9752
Epoch 11/100
354/354 [==============================] - 0s 310us/sample - loss: 13.5901 - mae: 2.5801 - mse: 13.5901 - val_loss: 18.9675 - val_mae: 3.0192 - val_mse: 18.9675
Epoch 12/100
354/354 [==============================] - 0s 270us/sample - loss: 12.9341 - mae: 2.5158 - mse: 12.9341 - val_loss: 20.6757 - val_mae: 3.1029 - val_mse: 20.6757
Epoch 13/100
354/354 [==============================] - 0s 311us/sample - loss: 12.4520 - mae: 2.5061 - mse: 12.4520 - val_loss: 17.6596 - val_mae: 2.8839 - val_mse: 17.6596
Epoch 14/100
354/354 [==============================] - 0s 311us/sample - loss: 11.9484 - mae: 2.4710 - mse: 11.9484 - val_loss: 16.7645 - val_mae: 2.8083 - val_mse: 16.7645
Epoch 15/100
354/354 [==============================] - 0s 269us/sample - loss: 11.6260 - mae: 2.3959 - mse: 11.6260 - val_loss: 17.5048 - val_mae: 2.8007 - val_mse: 17.5048
Epoch 16/100
354/354 [==============================] - 0s 267us/sample - loss: 11.2504 - mae: 2.3567 - mse: 11.2504 - val_loss: 18.6748 - val_mae: 2.8771 - val_mse: 18.6748
Epoch 17/100
354/354 [==============================] - 0s 269us/sample - loss: 10.8352 - mae: 2.3051 - mse: 10.8352 - val_loss: 19.4796 - val_mae: 3.0041 - val_mse: 19.4796
Epoch 18/100
354/354 [==============================] - 0s 267us/sample - loss: 10.6488 - mae: 2.3377 - mse: 10.6488 - val_loss: 17.0329 - val_mae: 2.7640 - val_mse: 17.0329
Epoch 19/100
354/354 [==============================] - 0s 273us/sample - loss: 10.2134 - mae: 2.2439 - mse: 10.2134 - val_loss: 18.0589 - val_mae: 2.8565 - val_mse: 18.0589
Epoch 20/100
354/354 [==============================] - 0s 315us/sample - loss: 10.1024 - mae: 2.2432 - mse: 10.1024 - val_loss: 16.5968 - val_mae: 2.7402 - val_mse: 16.5968
Epoch 21/100
354/354 [==============================] - 0s 277us/sample - loss: 10.0576 - mae: 2.2401 - mse: 10.0576 - val_loss: 18.4496 - val_mae: 2.8156 - val_mse: 18.4496
Epoch 22/100
354/354 [==============================] - 0s 269us/sample - loss: 9.6590 - mae: 2.1500 - mse: 9.6590 - val_loss: 18.7084 - val_mae: 2.8309 - val_mse: 18.7084
Epoch 23/100
354/354 [==============================] - 0s 277us/sample - loss: 9.4596 - mae: 2.1967 - mse: 9.4596 - val_loss: 18.0308 - val_mae: 2.7595 - val_mse: 18.0308
Epoch 24/100
354/354 [==============================] - 0s 272us/sample - loss: 9.2778 - mae: 2.1680 - mse: 9.2778 - val_loss: 18.9343 - val_mae: 2.9152 - val_mse: 18.9343
Epoch 25/100
354/354 [==============================] - 0s 267us/sample - loss: 9.1075 - mae: 2.1451 - mse: 9.1076 - val_loss: 18.0646 - val_mae: 2.8202 - val_mse: 18.0646
Epoch 26/100
354/354 [==============================] - 0s 273us/sample - loss: 9.2196 - mae: 2.1282 - mse: 9.2196 - val_loss: 18.7244 - val_mae: 2.8288 - val_mse: 18.7244
Epoch 27/100
354/354 [==============================] - 0s 267us/sample - loss: 8.5733 - mae: 2.0703 - mse: 8.5733 - val_loss: 16.9568 - val_mae: 2.8123 - val_mse: 16.9568
Epoch 28/100
354/354 [==============================] - 0s 309us/sample - loss: 8.6252 - mae: 2.0821 - mse: 8.6252 - val_loss: 16.4984 - val_mae: 2.7069 - val_mse: 16.4984
Epoch 29/100
354/354 [==============================] - 0s 307us/sample - loss: 8.6336 - mae: 2.0822 - mse: 8.6336 - val_loss: 16.0498 - val_mae: 2.6532 - val_mse: 16.0498
Epoch 30/100
354/354 [==============================] - 0s 321us/sample - loss: 8.5071 - mae: 2.0379 - mse: 8.5071 - val_loss: 15.1042 - val_mae: 2.6004 - val_mse: 15.1042
Epoch 31/100
354/354 [==============================] - 0s 273us/sample - loss: 8.2888 - mae: 2.0627 - mse: 8.2888 - val_loss: 16.2730 - val_mae: 2.7019 - val_mse: 16.2730
Epoch 32/100
354/354 [==============================] - 0s 271us/sample - loss: 8.2021 - mae: 2.0000 - mse: 8.2021 - val_loss: 17.2852 - val_mae: 2.7962 - val_mse: 17.2852
Epoch 33/100
354/354 [==============================] - 0s 272us/sample - loss: 8.2973 - mae: 2.0336 - mse: 8.2973 - val_loss: 16.8973 - val_mae: 2.7318 - val_mse: 16.8973
Epoch 34/100
354/354 [==============================] - 0s 257us/sample - loss: 8.1033 - mae: 2.0105 - mse: 8.1033 - val_loss: 16.6509 - val_mae: 2.8218 - val_mse: 16.6509
Epoch 35/100
354/354 [==============================] - 0s 272us/sample - loss: 8.0724 - mae: 2.0170 - mse: 8.0724 - val_loss: 16.0802 - val_mae: 2.6733 - val_mse: 16.0802
Epoch 36/100
354/354 [==============================] - 0s 257us/sample - loss: 7.7939 - mae: 1.9606 - mse: 7.7939 - val_loss: 17.1008 - val_mae: 2.7384 - val_mse: 17.1008
Epoch 37/100
354/354 [==============================] - 0s 269us/sample - loss: 7.7812 - mae: 1.9719 - mse: 7.7812 - val_loss: 16.3472 - val_mae: 2.6939 - val_mse: 16.3472
Epoch 38/100
354/354 [==============================] - 0s 276us/sample - loss: 7.4494 - mae: 1.9224 - mse: 7.4494 - val_loss: 19.3916 - val_mae: 2.9414 - val_mse: 19.3916
Epoch 39/100
354/354 [==============================] - 0s 271us/sample - loss: 7.8023 - mae: 1.9978 - mse: 7.8023 - val_loss: 16.3499 - val_mae: 2.7018 - val_mse: 16.3499
Epoch 40/100
354/354 [==============================] - 0s 270us/sample - loss: 7.3681 - mae: 1.9293 - mse: 7.3681 - val_loss: 16.0445 - val_mae: 2.6872 - val_mse: 16.0445
Epoch 41/100
354/354 [==============================] - 0s 267us/sample - loss: 7.3013 - mae: 1.8820 - mse: 7.3013 - val_loss: 16.5657 - val_mae: 2.7222 - val_mse: 16.5657
Epoch 42/100
354/354 [==============================] - 0s 274us/sample - loss: 7.3978 - mae: 1.9154 - mse: 7.3978 - val_loss: 15.9821 - val_mae: 2.6576 - val_mse: 15.9821
Epoch 43/100
354/354 [==============================] - 0s 319us/sample - loss: 6.9832 - mae: 1.9037 - mse: 6.9832 - val_loss: 14.4977 - val_mae: 2.5418 - val_mse: 14.4977
Epoch 44/100
354/354 [==============================] - 0s 269us/sample - loss: 7.2307 - mae: 1.8968 - mse: 7.2307 - val_loss: 15.0962 - val_mae: 2.6188 - val_mse: 15.0962
Epoch 45/100
354/354 [==============================] - 0s 256us/sample - loss: 7.0289 - mae: 1.8685 - mse: 7.0289 - val_loss: 17.0531 - val_mae: 2.8123 - val_mse: 17.0531
Epoch 46/100
354/354 [==============================] - 0s 270us/sample - loss: 6.9010 - mae: 1.8537 - mse: 6.9010 - val_loss: 16.7469 - val_mae: 2.7081 - val_mse: 16.7469
Epoch 47/100
354/354 [==============================] - 0s 268us/sample - loss: 6.9256 - mae: 1.8664 - mse: 6.9256 - val_loss: 16.1227 - val_mae: 2.7760 - val_mse: 16.1227
Epoch 48/100
354/354 [==============================] - 0s 273us/sample - loss: 6.8333 - mae: 1.8552 - mse: 6.8333 - val_loss: 14.9262 - val_mae: 2.6213 - val_mse: 14.9262
Epoch 49/100
354/354 [==============================] - 0s 313us/sample - loss: 6.7351 - mae: 1.8375 - mse: 6.7351 - val_loss: 14.2252 - val_mae: 2.5309 - val_mse: 14.2252
Epoch 50/100
354/354 [==============================] - 0s 276us/sample - loss: 6.6672 - mae: 1.7913 - mse: 6.6672 - val_loss: 16.5652 - val_mae: 2.7693 - val_mse: 16.5652
Epoch 51/100
354/354 [==============================] - 0s 271us/sample - loss: 6.6222 - mae: 1.8325 - mse: 6.6222 - val_loss: 14.8928 - val_mae: 2.5921 - val_mse: 14.8928
Epoch 52/100
354/354 [==============================] - 0s 271us/sample - loss: 6.5606 - mae: 1.8150 - mse: 6.5606 - val_loss: 14.7382 - val_mae: 2.6124 - val_mse: 14.7382
Epoch 53/100
354/354 [==============================] - 0s 273us/sample - loss: 6.5737 - mae: 1.7757 - mse: 6.5737 - val_loss: 14.8866 - val_mae: 2.6357 - val_mse: 14.8866
Epoch 54/100
354/354 [==============================] - 0s 264us/sample - loss: 6.3009 - mae: 1.7569 - mse: 6.3009 - val_loss: 14.6100 - val_mae: 2.6115 - val_mse: 14.6100
Epoch 55/100
354/354 [==============================] - 0s 272us/sample - loss: 6.2524 - mae: 1.7679 - mse: 6.2524 - val_loss: 17.4939 - val_mae: 2.8652 - val_mse: 17.4939
Epoch 56/100
354/354 [==============================] - 0s 319us/sample - loss: 6.2461 - mae: 1.7830 - mse: 6.2461 - val_loss: 14.0397 - val_mae: 2.5829 - val_mse: 14.0397
Epoch 57/100
354/354 [==============================] - 0s 267us/sample - loss: 6.3124 - mae: 1.7788 - mse: 6.3124 - val_loss: 15.4946 - val_mae: 2.7133 - val_mse: 15.4946
Epoch 58/100
354/354 [==============================] - 0s 269us/sample - loss: 6.1133 - mae: 1.7282 - mse: 6.1133 - val_loss: 14.5244 - val_mae: 2.5982 - val_mse: 14.5244
Epoch 59/100
354/354 [==============================] - 0s 259us/sample - loss: 6.2866 - mae: 1.7860 - mse: 6.2866 - val_loss: 15.8915 - val_mae: 2.7331 - val_mse: 15.8915
Epoch 60/100
354/354 [==============================] - 0s 311us/sample - loss: 5.9945 - mae: 1.7178 - mse: 5.9945 - val_loss: 13.2656 - val_mae: 2.5189 - val_mse: 13.2656
Epoch 61/100
354/354 [==============================] - 0s 263us/sample - loss: 6.0649 - mae: 1.7064 - mse: 6.0649 - val_loss: 15.4134 - val_mae: 2.7351 - val_mse: 15.4134
Epoch 62/100
354/354 [==============================] - 0s 268us/sample - loss: 5.9954 - mae: 1.6767 - mse: 5.9954 - val_loss: 13.8741 - val_mae: 2.5721 - val_mse: 13.8741
Epoch 63/100
354/354 [==============================] - 0s 254us/sample - loss: 5.9648 - mae: 1.7023 - mse: 5.9648 - val_loss: 15.1974 - val_mae: 2.6602 - val_mse: 15.1974
Epoch 64/100
354/354 [==============================] - 0s 272us/sample - loss: 5.7276 - mae: 1.7202 - mse: 5.7276 - val_loss: 14.5766 - val_mae: 2.6508 - val_mse: 14.5766
Epoch 65/100
354/354 [==============================] - 0s 266us/sample - loss: 5.8443 - mae: 1.6907 - mse: 5.8443 - val_loss: 15.5797 - val_mae: 2.6848 - val_mse: 15.5797
Epoch 66/100
354/354 [==============================] - 0s 273us/sample - loss: 5.8195 - mae: 1.7295 - mse: 5.8195 - val_loss: 14.5484 - val_mae: 2.6527 - val_mse: 14.5484
Epoch 67/100
354/354 [==============================] - 0s 266us/sample - loss: 5.8216 - mae: 1.6966 - mse: 5.8216 - val_loss: 14.3616 - val_mae: 2.5733 - val_mse: 14.3616
Epoch 68/100
354/354 [==============================] - 0s 271us/sample - loss: 5.6572 - mae: 1.6543 - mse: 5.6572 - val_loss: 16.1438 - val_mae: 2.8151 - val_mse: 16.1438
Epoch 69/100
354/354 [==============================] - 0s 259us/sample - loss: 5.5142 - mae: 1.6657 - mse: 5.5142 - val_loss: 14.2295 - val_mae: 2.5796 - val_mse: 14.2295
Epoch 70/100
354/354 [==============================] - 0s 273us/sample - loss: 5.4965 - mae: 1.6313 - mse: 5.4965 - val_loss: 15.2662 - val_mae: 2.6980 - val_mse: 15.2662
Epoch 71/100
354/354 [==============================] - 0s 270us/sample - loss: 5.4534 - mae: 1.6717 - mse: 5.4534 - val_loss: 14.5025 - val_mae: 2.6441 - val_mse: 14.5025
Epoch 72/100
354/354 [==============================] - 0s 253us/sample - loss: 5.5146 - mae: 1.6526 - mse: 5.5146 - val_loss: 13.7906 - val_mae: 2.5753 - val_mse: 13.7906
Epoch 73/100
354/354 [==============================] - 0s 272us/sample - loss: 5.4499 - mae: 1.6130 - mse: 5.4499 - val_loss: 15.1649 - val_mae: 2.7624 - val_mse: 15.1649
Epoch 74/100
354/354 [==============================] - 0s 309us/sample - loss: 5.3808 - mae: 1.6297 - mse: 5.3808 - val_loss: 12.9326 - val_mae: 2.5007 - val_mse: 12.9326
Epoch 75/100
354/354 [==============================] - 0s 258us/sample - loss: 5.3546 - mae: 1.6313 - mse: 5.3546 - val_loss: 13.6397 - val_mae: 2.5810 - val_mse: 13.6397
Epoch 76/100
354/354 [==============================] - 0s 265us/sample - loss: 5.1666 - mae: 1.5998 - mse: 5.1666 - val_loss: 15.6069 - val_mae: 2.7630 - val_mse: 15.6069
Epoch 77/100
354/354 [==============================] - 0s 272us/sample - loss: 5.2465 - mae: 1.6192 - mse: 5.2465 - val_loss: 14.8084 - val_mae: 2.6388 - val_mse: 14.8084
Epoch 78/100
354/354 [==============================] - 0s 265us/sample - loss: 5.1107 - mae: 1.5772 - mse: 5.1107 - val_loss: 13.6319 - val_mae: 2.5756 - val_mse: 13.6319
Epoch 79/100
354/354 [==============================] - 0s 272us/sample - loss: 5.2677 - mae: 1.5989 - mse: 5.2677 - val_loss: 15.0306 - val_mae: 2.7715 - val_mse: 15.0306
Epoch 80/100
354/354 [==============================] - 0s 274us/sample - loss: 5.0534 - mae: 1.5504 - mse: 5.0534 - val_loss: 13.3917 - val_mae: 2.5352 - val_mse: 13.3917
Epoch 81/100
354/354 [==============================] - 0s 272us/sample - loss: 5.1013 - mae: 1.5826 - mse: 5.1013 - val_loss: 14.6761 - val_mae: 2.7158 - val_mse: 14.6761
Epoch 82/100
354/354 [==============================] - 0s 258us/sample - loss: 5.1137 - mae: 1.5984 - mse: 5.1137 - val_loss: 14.7063 - val_mae: 2.6576 - val_mse: 14.7063
Epoch 83/100
354/354 [==============================] - 0s 269us/sample - loss: 4.9343 - mae: 1.5545 - mse: 4.9343 - val_loss: 13.6205 - val_mae: 2.5494 - val_mse: 13.6205
Epoch 84/100
354/354 [==============================] - 0s 277us/sample - loss: 4.9839 - mae: 1.5815 - mse: 4.9839 - val_loss: 13.3857 - val_mae: 2.6047 - val_mse: 13.3857
Epoch 85/100
354/354 [==============================] - 0s 277us/sample - loss: 4.9946 - mae: 1.5818 - mse: 4.9946 - val_loss: 14.1012 - val_mae: 2.6176 - val_mse: 14.1012
Epoch 86/100
354/354 [==============================] - 0s 273us/sample - loss: 4.7884 - mae: 1.5321 - mse: 4.7884 - val_loss: 14.5182 - val_mae: 2.6687 - val_mse: 14.5182
Epoch 87/100
354/354 [==============================] - 0s 311us/sample - loss: 4.8134 - mae: 1.5660 - mse: 4.8134 - val_loss: 12.7966 - val_mae: 2.5734 - val_mse: 12.7966
Epoch 88/100
354/354 [==============================] - 0s 273us/sample - loss: 4.7923 - mae: 1.5483 - mse: 4.7923 - val_loss: 14.4001 - val_mae: 2.6707 - val_mse: 14.4001
Epoch 89/100
354/354 [==============================] - 0s 274us/sample - loss: 4.6705 - mae: 1.5086 - mse: 4.6705 - val_loss: 15.3677 - val_mae: 2.7359 - val_mse: 15.3677
Epoch 90/100
354/354 [==============================] - 0s 280us/sample - loss: 4.8776 - mae: 1.5806 - mse: 4.8776 - val_loss: 14.4442 - val_mae: 2.6343 - val_mse: 14.4442
Epoch 91/100
354/354 [==============================] - 0s 260us/sample - loss: 4.6349 - mae: 1.5300 - mse: 4.6349 - val_loss: 14.2969 - val_mae: 2.7718 - val_mse: 14.2969
Epoch 92/100
354/354 [==============================] - 0s 273us/sample - loss: 4.7835 - mae: 1.5637 - mse: 4.7835 - val_loss: 13.1123 - val_mae: 2.5578 - val_mse: 13.1123
Epoch 93/100
354/354 [==============================] - 0s 277us/sample - loss: 4.6759 - mae: 1.5259 - mse: 4.6759 - val_loss: 14.3508 - val_mae: 2.6888 - val_mse: 14.3507
Epoch 94/100
354/354 [==============================] - 0s 273us/sample - loss: 4.7856 - mae: 1.5560 - mse: 4.7856 - val_loss: 14.5237 - val_mae: 2.6956 - val_mse: 14.5237
Epoch 95/100
354/354 [==============================] - 0s 313us/sample - loss: 4.7038 - mae: 1.5331 - mse: 4.7038 - val_loss: 12.7707 - val_mae: 2.5393 - val_mse: 12.7707
Epoch 96/100
354/354 [==============================] - 0s 277us/sample - loss: 4.6006 - mae: 1.5331 - mse: 4.6006 - val_loss: 13.8540 - val_mae: 2.6720 - val_mse: 13.8540
Epoch 97/100
354/354 [==============================] - 0s 269us/sample - loss: 4.4720 - mae: 1.4912 - mse: 4.4720 - val_loss: 13.1524 - val_mae: 2.6311 - val_mse: 13.1524
Epoch 98/100
354/354 [==============================] - 0s 309us/sample - loss: 4.4242 - mae: 1.4854 - mse: 4.4242 - val_loss: 11.7020 - val_mae: 2.4886 - val_mse: 11.7020
Epoch 99/100
354/354 [==============================] - 0s 280us/sample - loss: 4.5642 - mae: 1.4920 - mse: 4.5642 - val_loss: 12.6523 - val_mae: 2.5232 - val_mse: 12.6523
Epoch 100/100
354/354 [==============================] - 0s 274us/sample - loss: 4.1971 - mae: 1.4564 - mse: 4.1971 - val_loss: 18.7164 - val_mae: 3.0774 - val_mse: 18.7164
%% Cell type:markdown id: tags:
## Step 6 - Evaluate
### 6.1 - Model evaluation
MAE = Mean Absolute Error (between the labels and the predictions)
An MAE of 3 means the predictions are off by $3,000 on average (prices are expressed in k$).
%% Cell type:code id: tags:
``` python
score = model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Output
x_test / loss : 18.7164
x_test / mae : 3.0774
x_test / mse : 18.7164
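%% Cell type:markdown id: tags:
As a sanity check (a sketch, not in the original), the MAE can be recomputed by hand from the model's predictions :
%% Cell type:code id: tags:
``` python
# MAE is simply the mean of |y_true - y_pred|
y_pred = model.predict(x_test).flatten()
print('manual mae : {:5.4f}'.format(np.abs(y_test - y_pred).mean()))
```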
%% Cell type:markdown id: tags:
### 6.2 - Training history
What was the best result during our training?
%% Cell type:code id: tags:
``` python
print("min( val_mae ) : {:.4f}".format( min(history.history["val_mae"]) ) )
```
%% Output
min( val_mae ) : 2.4886
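%% Cell type:markdown id: tags:
A small addition (sketch) : locate the epoch that reached this best score.
%% Cell type:code id: tags:
``` python
# Epochs are 0-indexed in history.history, hence the +1
best_epoch = int(np.argmin(history.history['val_mae'])) + 1
print('best epoch : ', best_epoch)
```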
%% Cell type:code id: tags:
``` python
ooo.plot_history(history, plot={'MSE' :['mse', 'val_mse'],
                                'MAE' :['mae', 'val_mae'],
                                'LOSS':['loss','val_loss']})
```
%% Output
%% Cell type:markdown id: tags:
## Step 7 - Restore a model
%% Cell type:markdown id: tags:
### 7.1 - Reload model
%% Cell type:code id: tags:
``` python
loaded_model = tf.keras.models.load_model('./run/models/best_model.h5')
loaded_model.summary()
print("Loaded.")
```
%% Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Dense_n1 (Dense) (None, 64) 896
_________________________________________________________________
Dense_n2 (Dense) (None, 64) 4160
_________________________________________________________________
Output (Dense) (None, 1) 65
=================================================================
Total params: 5,121
Trainable params: 5,121
Non-trainable params: 0
_________________________________________________________________
Loaded.
%% Cell type:markdown id: tags:
### 7.2 - Evaluate it
%% Cell type:code id: tags:
``` python
score = loaded_model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Output
x_test / loss : 11.7020
x_test / mae : 2.4886
x_test / mse : 11.7020
%% Cell type:markdown id: tags:
### 7.3 - Make a prediction
%% Cell type:code id: tags:
``` python
# ---- A hand-made, already-normalized sample of 13 features
mon_test = [-0.20113196, -0.48631663,  1.23572348, -0.26929877,  2.67879106,
            -0.89623587,  1.09961251, -1.05826704, -0.55823117, -0.06159088,
            -1.76085159, -1.97039608,  0.52775666]
mon_test = np.array(mon_test).reshape(1,13)
```
%% Cell type:code id: tags:
``` python
predictions = loaded_model.predict( mon_test )
print("Prédiction : {:.2f} K$ Reality : {:.2f} K$".format(predictions[0][0], y_train[13]))
```
%% Output
Prediction : 16.20 K$ Reality : 21.70 K$
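%% Cell type:markdown id: tags:
Note that `mon_test` is a hand-made feature vector, so the comparison with `y_train[13]` is only indicative. A variant sketch (an assumption, not in the original) predicts on a genuine test sample, so that prediction and reality refer to the same house :
%% Cell type:code id: tags:
``` python
# Predict on a real, already normalized test sample
sample = x_test[13:14]                      # shape (1, 13)
pred   = loaded_model.predict(sample)[0][0]
print("Prediction : {:.2f} K$   Reality : {:.2f} K$".format(pred, y_test[13]))
```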
%% Cell type:markdown id: tags:
-----
That's all folks!
%% Cell type:markdown id: tags:
![Fidle](../fidle/img/00-Fidle-header-01.png)
# <!-- TITLE --> Regression with a Dense Network (DNN) - Advanced code
<!-- DESC --> More advanced example of DNN network code - BHPD dataset
## Objectives :
- Predict **housing prices** from a set of house features.
- Understand the principle and the architecture of a regression with a dense neural network, including saving and restoring the trained model.
The **[Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html)** consists of the prices of houses in various places in Boston.
Alongside the price, the dataset also provides information such as the crime rate, the proportion of non-retail business in the town,
the age of the homeowners and many other attributes...
What we're going to do:
- (Retrieve data)
- (Preparing the data)
- (Build a model)
- Train and save the model
- Restore saved model
- Evaluate the model
- Make some predictions
%% Cell type:markdown id: tags:
## Step 1 - Import and init
%% Cell type:code id: tags:
``` python
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os,sys
from IPython.display import display, Markdown
from importlib import reload
sys.path.append('..')
import fidle.pwk as ooo
ooo.init()
os.makedirs('./run/models', mode=0o750, exist_ok=True)
```
%% Output
FIDLE 2020 - Practical Work Module
Version : 0.2.9
Run time : Wednesday 19 February 2020, 10:13:01
TensorFlow version : 2.0.0
Keras version : 2.2.4-tf
%% Cell type:markdown id: tags:
## Step 2 - Retrieve data
### 2.1 - Option 1 : From Keras
Boston housing is a famous historic dataset, so we can get it directly from [Keras datasets](https://www.tensorflow.org/api_docs/python/tf/keras/datasets)
%% Cell type:raw id: tags:
(x_train, y_train), (x_test, y_test) = keras.datasets.boston_housing.load_data(test_split=0.2, seed=113)
%% Cell type:markdown id: tags:
### 2.2 - Option 2 : From a CSV file
More fun!
%% Cell type:code id: tags:
``` python
data = pd.read_csv('./data/BostonHousing.csv', header=0)
display(data.head(5).style.format("{0:.2f}"))
print('Missing data : ',data.isna().sum().sum(), ' Shape is : ', data.shape)
```
%% Output
Missing data : 0 Shape is : (506, 14)
%% Cell type:markdown id: tags:
## Step 3 - Preparing the data
### 3.1 - Split data
We will use 70% of the data for training and 30% for validation.
x will be the input data and y the expected output.
%% Cell type:code id: tags:
``` python
# ---- Split => train, test
#
data_train = data.sample(frac=0.7, axis=0)
data_test = data.drop(data_train.index)
# ---- Split => x,y (medv is price)
#
x_train = data_train.drop('medv', axis=1)
y_train = data_train['medv']
x_test = data_test.drop('medv', axis=1)
y_test = data_test['medv']
print('Original data shape was : ',data.shape)
print('x_train : ',x_train.shape, 'y_train : ',y_train.shape)
print('x_test : ',x_test.shape, 'y_test : ',y_test.shape)
```
%% Output
Original data shape was : (506, 14)
x_train : (354, 13) y_train : (354,)
x_test : (152, 13) y_test : (152,)
%% Cell type:markdown id: tags:
### 3.2 - Data normalization
**Note :**
- All input data must be normalized: both train and test.
- To do this, we subtract the mean and divide by the standard deviation.
- However, the test data must not be used in any way, not even to compute the normalization statistics.
- The mean and the standard deviation are therefore computed on the train data only.
%% Cell type:code id: tags:
``` python
display(x_train.describe().style.format("{0:.2f}").set_caption("Before normalization :"))
mean = x_train.mean()
std = x_train.std()
x_train = (x_train - mean) / std
x_test = (x_test - mean) / std
display(x_train.describe().style.format("{0:.2f}").set_caption("After normalization :"))
x_train, y_train = np.array(x_train), np.array(y_train)
x_test, y_test = np.array(x_test), np.array(y_test)
```
%% Output
%% Cell type:markdown id: tags:
## Step 4 - Build a model
More information about :
- [Optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers)
- [Activation](https://www.tensorflow.org/api_docs/python/tf/keras/activations)
- [Loss](https://www.tensorflow.org/api_docs/python/tf/keras/losses)
- [Metrics](https://www.tensorflow.org/api_docs/python/tf/keras/metrics)
%% Cell type:code id: tags:
``` python
def get_model_v1(shape):
    model = keras.models.Sequential()
    model.add(keras.layers.Input(shape, name="InputLayer"))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n1'))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n2'))
    model.add(keras.layers.Dense(1, name='Output'))
    model.compile(optimizer = 'rmsprop',
                  loss      = 'mse',
                  metrics   = ['mae', 'mse'] )
    return model
```
%% Cell type:markdown id: tags:
## Step 5 - Train the model
### 5.1 - Get it
%% Cell type:code id: tags:
``` python
model=get_model_v1( (13,) )
model.summary()
keras.utils.plot_model( model, to_file='./run/model.png', show_shapes=True, show_layer_names=True, dpi=96)
```
%% Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Dense_n1 (Dense) (None, 64) 896
_________________________________________________________________
Dense_n2 (Dense) (None, 64) 4160
_________________________________________________________________
Output (Dense) (None, 1) 65
=================================================================
Total params: 5,121
Trainable params: 5,121
Non-trainable params: 0
_________________________________________________________________
<IPython.core.display.Image object>
%% Cell type:markdown id: tags:
### 5.2 - Add callback
%% Cell type:code id: tags:
``` python
# ---- Callback : save the best model (lowest val_loss) seen during training
os.makedirs('./run/models', mode=0o750, exist_ok=True)
save_path = "./run/models/best_model.h5"
savemodel_callback = tf.keras.callbacks.ModelCheckpoint(filepath=save_path, verbose=0, save_best_only=True)
```
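%% Cell type:markdown id: tags:
`ModelCheckpoint` monitors `val_loss` by default ; a hypothetical variant (not used below) would track `val_mae` instead :
%% Cell type:code id: tags:
``` python
# Alternative checkpoint (sketch) : keep the model with the lowest val_mae
savemodel_callback_mae = tf.keras.callbacks.ModelCheckpoint(filepath=save_path,
                                                            monitor='val_mae',
                                                            verbose=0,
                                                            save_best_only=True)
```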
%% Cell type:markdown id: tags:
### 5.3 - Train it
%% Cell type:code id: tags:
``` python
history = model.fit(x_train,
                    y_train,
                    epochs          = 100,
                    batch_size      = 10,
                    verbose         = 1,
                    validation_data = (x_test, y_test),
                    callbacks       = [savemodel_callback])
```
%% Output
Train on 354 samples, validate on 152 samples
Epoch 1/100
354/354 [==============================] - 1s 2ms/sample - loss: 483.2389 - mae: 20.0008 - mse: 483.2388 - val_loss: 421.2562 - val_mae: 18.2848 - val_mse: 421.2561
Epoch 2/100
354/354 [==============================] - 0s 232us/sample - loss: 270.7346 - mae: 13.9655 - mse: 270.7346 - val_loss: 187.7437 - val_mae: 10.9696 - val_mse: 187.7437
Epoch 3/100
354/354 [==============================] - 0s 223us/sample - loss: 108.5703 - mae: 7.5648 - mse: 108.5703 - val_loss: 70.6387 - val_mae: 6.1047 - val_mse: 70.6387
Epoch 4/100
354/354 [==============================] - 0s 234us/sample - loss: 53.8803 - mae: 5.1135 - mse: 53.8803 - val_loss: 40.0765 - val_mae: 4.6628 - val_mse: 40.0765
Epoch 5/100
354/354 [==============================] - 0s 223us/sample - loss: 34.2413 - mae: 4.1321 - mse: 34.2413 - val_loss: 29.1298 - val_mae: 3.8690 - val_mse: 29.1298
Epoch 6/100
354/354 [==============================] - 0s 226us/sample - loss: 24.9834 - mae: 3.4851 - mse: 24.9834 - val_loss: 22.8731 - val_mae: 3.3820 - val_mse: 22.8731
Epoch 7/100
354/354 [==============================] - 0s 228us/sample - loss: 21.2207 - mae: 3.2139 - mse: 21.2207 - val_loss: 20.9766 - val_mae: 3.2391 - val_mse: 20.9766
Epoch 8/100
354/354 [==============================] - 0s 223us/sample - loss: 19.2641 - mae: 3.0025 - mse: 19.2641 - val_loss: 19.5046 - val_mae: 3.0795 - val_mse: 19.5046
Epoch 9/100
354/354 [==============================] - 0s 220us/sample - loss: 17.8432 - mae: 2.8878 - mse: 17.8432 - val_loss: 18.3068 - val_mae: 3.0834 - val_mse: 18.3068
Epoch 10/100
354/354 [==============================] - 0s 224us/sample - loss: 16.7673 - mae: 2.7365 - mse: 16.7673 - val_loss: 17.4260 - val_mae: 3.0035 - val_mse: 17.4260
Epoch 11/100
354/354 [==============================] - 0s 225us/sample - loss: 15.6927 - mae: 2.6873 - mse: 15.6927 - val_loss: 17.3096 - val_mae: 3.0492 - val_mse: 17.3096
Epoch 12/100
354/354 [==============================] - 0s 222us/sample - loss: 15.4113 - mae: 2.6274 - mse: 15.4113 - val_loss: 15.7095 - val_mae: 2.8104 - val_mse: 15.7095
Epoch 13/100
354/354 [==============================] - 0s 220us/sample - loss: 14.7243 - mae: 2.5393 - mse: 14.7243 - val_loss: 15.6497 - val_mae: 2.9052 - val_mse: 15.6497
Epoch 14/100
354/354 [==============================] - 0s 228us/sample - loss: 14.2611 - mae: 2.5371 - mse: 14.2611 - val_loss: 14.9650 - val_mae: 2.8165 - val_mse: 14.9650
Epoch 15/100
354/354 [==============================] - 0s 222us/sample - loss: 14.0530 - mae: 2.5289 - mse: 14.0530 - val_loss: 14.8840 - val_mae: 2.8196 - val_mse: 14.8840
Epoch 16/100
354/354 [==============================] - 0s 224us/sample - loss: 13.3820 - mae: 2.4568 - mse: 13.3820 - val_loss: 13.7568 - val_mae: 2.6754 - val_mse: 13.7568
Epoch 17/100
354/354 [==============================] - 0s 218us/sample - loss: 13.2232 - mae: 2.4318 - mse: 13.2232 - val_loss: 13.6934 - val_mae: 2.6355 - val_mse: 13.6934
Epoch 18/100
354/354 [==============================] - 0s 183us/sample - loss: 12.8038 - mae: 2.3743 - mse: 12.8038 - val_loss: 13.7276 - val_mae: 2.6466 - val_mse: 13.7276
Epoch 19/100
354/354 [==============================] - 0s 223us/sample - loss: 12.4826 - mae: 2.3804 - mse: 12.4826 - val_loss: 13.0037 - val_mae: 2.5279 - val_mse: 13.0037
Epoch 20/100
354/354 [==============================] - 0s 222us/sample - loss: 12.2345 - mae: 2.3264 - mse: 12.2345 - val_loss: 12.8911 - val_mae: 2.5583 - val_mse: 12.8911
Epoch 21/100
354/354 [==============================] - 0s 231us/sample - loss: 12.0720 - mae: 2.3410 - mse: 12.0720 - val_loss: 12.5983 - val_mae: 2.5747 - val_mse: 12.5983
Epoch 22/100
354/354 [==============================] - 0s 224us/sample - loss: 11.7805 - mae: 2.2897 - mse: 11.7805 - val_loss: 12.1645 - val_mae: 2.5094 - val_mse: 12.1645
Epoch 23/100
354/354 [==============================] - 0s 174us/sample - loss: 11.4012 - mae: 2.2581 - mse: 11.4012 - val_loss: 13.6673 - val_mae: 2.7201 - val_mse: 13.6673
Epoch 24/100
354/354 [==============================] - 0s 227us/sample - loss: 11.2741 - mae: 2.2712 - mse: 11.2741 - val_loss: 11.6918 - val_mae: 2.4039 - val_mse: 11.6918
Epoch 25/100
354/354 [==============================] - 0s 179us/sample - loss: 11.2056 - mae: 2.2226 - mse: 11.2056 - val_loss: 12.3935 - val_mae: 2.6021 - val_mse: 12.3935
Epoch 26/100
354/354 [==============================] - 0s 173us/sample - loss: 10.8629 - mae: 2.2289 - mse: 10.8629 - val_loss: 11.9155 - val_mae: 2.3744 - val_mse: 11.9155
Epoch 27/100
354/354 [==============================] - 0s 218us/sample - loss: 11.0500 - mae: 2.2151 - mse: 11.0500 - val_loss: 11.2193 - val_mae: 2.3695 - val_mse: 11.2193
Epoch 28/100
354/354 [==============================] - 0s 180us/sample - loss: 10.4915 - mae: 2.1578 - mse: 10.4915 - val_loss: 11.9919 - val_mae: 2.5344 - val_mse: 11.9919
Epoch 29/100
354/354 [==============================] - 0s 182us/sample - loss: 10.5519 - mae: 2.1307 - mse: 10.5519 - val_loss: 11.3573 - val_mae: 2.4664 - val_mse: 11.3573
Epoch 30/100
354/354 [==============================] - 0s 170us/sample - loss: 10.0504 - mae: 2.1281 - mse: 10.0504 - val_loss: 11.7304 - val_mae: 2.5102 - val_mse: 11.7304
Epoch 31/100
354/354 [==============================] - 0s 216us/sample - loss: 9.8992 - mae: 2.1397 - mse: 9.8992 - val_loss: 10.9137 - val_mae: 2.3602 - val_mse: 10.9137
Epoch 32/100
354/354 [==============================] - 0s 175us/sample - loss: 9.9473 - mae: 2.0665 - mse: 9.9473 - val_loss: 11.1929 - val_mae: 2.4503 - val_mse: 11.1929
Epoch 33/100
354/354 [==============================] - 0s 168us/sample - loss: 9.6057 - mae: 2.0609 - mse: 9.6057 - val_loss: 11.5105 - val_mae: 2.4419 - val_mse: 11.5105
Epoch 34/100
354/354 [==============================] - 0s 178us/sample - loss: 9.6783 - mae: 2.0484 - mse: 9.6783 - val_loss: 11.0130 - val_mae: 2.4072 - val_mse: 11.0130
Epoch 35/100
354/354 [==============================] - 0s 211us/sample - loss: 9.3834 - mae: 2.0337 - mse: 9.3834 - val_loss: 10.8769 - val_mae: 2.3960 - val_mse: 10.8769
Epoch 36/100
354/354 [==============================] - 0s 222us/sample - loss: 9.4563 - mae: 2.0349 - mse: 9.4563 - val_loss: 10.7918 - val_mae: 2.4397 - val_mse: 10.7918
Epoch 37/100
354/354 [==============================] - 0s 223us/sample - loss: 9.4023 - mae: 2.0246 - mse: 9.4023 - val_loss: 10.4927 - val_mae: 2.3926 - val_mse: 10.4927
Epoch 38/100
354/354 [==============================] - 0s 175us/sample - loss: 8.9702 - mae: 2.0006 - mse: 8.9702 - val_loss: 10.9715 - val_mae: 2.4245 - val_mse: 10.9715
Epoch 39/100
354/354 [==============================] - 0s 174us/sample - loss: 9.0225 - mae: 2.0207 - mse: 9.0225 - val_loss: 10.9499 - val_mae: 2.4785 - val_mse: 10.9499
Epoch 40/100
354/354 [==============================] - 0s 177us/sample - loss: 8.8586 - mae: 1.9994 - mse: 8.8586 - val_loss: 10.5540 - val_mae: 2.3401 - val_mse: 10.5540
Epoch 41/100
354/354 [==============================] - 0s 214us/sample - loss: 8.7666 - mae: 1.9705 - mse: 8.7666 - val_loss: 10.3300 - val_mae: 2.3298 - val_mse: 10.3300
Epoch 42/100
354/354 [==============================] - 0s 177us/sample - loss: 8.4090 - mae: 1.9556 - mse: 8.4090 - val_loss: 11.9413 - val_mae: 2.5568 - val_mse: 11.9413
Epoch 43/100
354/354 [==============================] - 0s 216us/sample - loss: 8.4974 - mae: 1.9809 - mse: 8.4974 - val_loss: 10.2694 - val_mae: 2.2804 - val_mse: 10.2694
Epoch 44/100
354/354 [==============================] - 0s 179us/sample - loss: 8.4512 - mae: 1.9371 - mse: 8.4512 - val_loss: 10.6134 - val_mae: 2.3782 - val_mse: 10.6134
Epoch 45/100
354/354 [==============================] - 0s 168us/sample - loss: 8.3356 - mae: 1.9116 - mse: 8.3356 - val_loss: 10.5007 - val_mae: 2.3672 - val_mse: 10.5007
Epoch 46/100
354/354 [==============================] - 0s 220us/sample - loss: 8.0746 - mae: 1.9163 - mse: 8.0746 - val_loss: 9.9081 - val_mae: 2.1968 - val_mse: 9.9081
Epoch 47/100
354/354 [==============================] - 0s 183us/sample - loss: 8.2374 - mae: 1.9080 - mse: 8.2374 - val_loss: 10.2771 - val_mae: 2.3529 - val_mse: 10.2771
Epoch 48/100
354/354 [==============================] - 0s 216us/sample - loss: 8.0765 - mae: 1.9000 - mse: 8.0765 - val_loss: 9.7120 - val_mae: 2.1879 - val_mse: 9.7120
Epoch 49/100
354/354 [==============================] - 0s 163us/sample - loss: 7.7848 - mae: 1.8825 - mse: 7.7848 - val_loss: 10.2084 - val_mae: 2.2360 - val_mse: 10.2084
Epoch 50/100
354/354 [==============================] - 0s 178us/sample - loss: 7.5973 - mae: 1.8669 - mse: 7.5973 - val_loss: 10.1582 - val_mae: 2.2808 - val_mse: 10.1582
Epoch 51/100
354/354 [==============================] - 0s 168us/sample - loss: 7.8596 - mae: 1.9102 - mse: 7.8596 - val_loss: 9.9785 - val_mae: 2.3041 - val_mse: 9.9785
Epoch 52/100
354/354 [==============================] - 0s 172us/sample - loss: 7.5027 - mae: 1.8527 - mse: 7.5027 - val_loss: 10.2315 - val_mae: 2.3614 - val_mse: 10.2315
Epoch 53/100
354/354 [==============================] - 0s 174us/sample - loss: 7.3160 - mae: 1.8556 - mse: 7.3160 - val_loss: 10.7149 - val_mae: 2.4225 - val_mse: 10.7149
Epoch 54/100
354/354 [==============================] - 0s 178us/sample - loss: 7.4478 - mae: 1.8692 - mse: 7.4478 - val_loss: 13.1244 - val_mae: 2.7923 - val_mse: 13.1244
Epoch 55/100
354/354 [==============================] - 0s 222us/sample - loss: 7.2579 - mae: 1.8375 - mse: 7.2579 - val_loss: 9.4053 - val_mae: 2.1927 - val_mse: 9.4053
Epoch 56/100
354/354 [==============================] - 0s 178us/sample - loss: 7.3045 - mae: 1.8785 - mse: 7.3045 - val_loss: 10.3231 - val_mae: 2.4311 - val_mse: 10.3231
Epoch 57/100
354/354 [==============================] - 0s 168us/sample - loss: 6.8708 - mae: 1.8047 - mse: 6.8708 - val_loss: 11.3678 - val_mae: 2.6010 - val_mse: 11.3678
Epoch 58/100
354/354 [==============================] - 0s 180us/sample - loss: 6.9471 - mae: 1.8179 - mse: 6.9471 - val_loss: 10.2855 - val_mae: 2.3937 - val_mse: 10.2855
Epoch 59/100
354/354 [==============================] - 0s 217us/sample - loss: 6.8858 - mae: 1.7987 - mse: 6.8858 - val_loss: 9.1795 - val_mae: 2.1552 - val_mse: 9.1795
Epoch 60/100
354/354 [==============================] - 0s 179us/sample - loss: 6.8982 - mae: 1.7783 - mse: 6.8982 - val_loss: 10.0291 - val_mae: 2.3000 - val_mse: 10.0291
Epoch 61/100
354/354 [==============================] - 0s 168us/sample - loss: 6.8502 - mae: 1.7688 - mse: 6.8502 - val_loss: 9.5141 - val_mae: 2.2370 - val_mse: 9.5141
Epoch 62/100
354/354 [==============================] - 0s 173us/sample - loss: 6.6801 - mae: 1.7737 - mse: 6.6801 - val_loss: 9.6853 - val_mae: 2.2719 - val_mse: 9.6853
Epoch 63/100
354/354 [==============================] - 0s 178us/sample - loss: 6.5468 - mae: 1.7479 - mse: 6.5468 - val_loss: 9.5858 - val_mae: 2.2346 - val_mse: 9.5858
Epoch 64/100
354/354 [==============================] - 0s 172us/sample - loss: 6.3406 - mae: 1.6985 - mse: 6.3406 - val_loss: 9.8893 - val_mae: 2.2439 - val_mse: 9.8893
Epoch 65/100
354/354 [==============================] - 0s 177us/sample - loss: 6.4070 - mae: 1.7780 - mse: 6.4071 - val_loss: 10.4085 - val_mae: 2.3908 - val_mse: 10.4085
Epoch 66/100
354/354 [==============================] - 0s 170us/sample - loss: 6.4227 - mae: 1.7042 - mse: 6.4227 - val_loss: 9.5313 - val_mae: 2.1998 - val_mse: 9.5313
Epoch 67/100
354/354 [==============================] - 0s 178us/sample - loss: 6.3353 - mae: 1.7095 - mse: 6.3353 - val_loss: 9.9436 - val_mae: 2.2965 - val_mse: 9.9436
Epoch 68/100
354/354 [==============================] - 0s 173us/sample - loss: 5.8545 - mae: 1.6760 - mse: 5.8545 - val_loss: 9.9311 - val_mae: 2.2837 - val_mse: 9.9311
Epoch 69/100
354/354 [==============================] - 0s 171us/sample - loss: 6.1148 - mae: 1.7286 - mse: 6.1148 - val_loss: 9.6456 - val_mae: 2.1932 - val_mse: 9.6456
Epoch 70/100
354/354 [==============================] - 0s 179us/sample - loss: 6.0462 - mae: 1.7194 - mse: 6.0462 - val_loss: 10.7485 - val_mae: 2.3224 - val_mse: 10.7485
Epoch 71/100
354/354 [==============================] - 0s 171us/sample - loss: 5.8132 - mae: 1.7049 - mse: 5.8132 - val_loss: 9.8704 - val_mae: 2.1916 - val_mse: 9.8704
Epoch 72/100
354/354 [==============================] - 0s 174us/sample - loss: 5.7957 - mae: 1.6492 - mse: 5.7957 - val_loss: 10.0593 - val_mae: 2.3159 - val_mse: 10.0593
Epoch 73/100
354/354 [==============================] - 0s 178us/sample - loss: 5.9002 - mae: 1.6952 - mse: 5.9002 - val_loss: 10.1425 - val_mae: 2.3594 - val_mse: 10.1425
Epoch 74/100
354/354 [==============================] - 0s 174us/sample - loss: 5.5721 - mae: 1.6277 - mse: 5.5721 - val_loss: 9.9564 - val_mae: 2.2284 - val_mse: 9.9564
Epoch 75/100
354/354 [==============================] - 0s 177us/sample - loss: 5.6730 - mae: 1.6669 - mse: 5.6730 - val_loss: 10.0358 - val_mae: 2.2259 - val_mse: 10.0358
Epoch 76/100
354/354 [==============================] - 0s 168us/sample - loss: 5.5947 - mae: 1.6216 - mse: 5.5947 - val_loss: 9.7815 - val_mae: 2.2282 - val_mse: 9.7815
Epoch 77/100
354/354 [==============================] - 0s 175us/sample - loss: 5.2870 - mae: 1.6492 - mse: 5.2870 - val_loss: 9.3813 - val_mae: 2.1987 - val_mse: 9.3813
Epoch 78/100
354/354 [==============================] - 0s 166us/sample - loss: 5.6015 - mae: 1.6183 - mse: 5.6015 - val_loss: 9.5577 - val_mae: 2.2139 - val_mse: 9.5577
Epoch 79/100
354/354 [==============================] - 0s 191us/sample - loss: 5.3793 - mae: 1.6202 - mse: 5.3793 - val_loss: 9.4099 - val_mae: 2.1957 - val_mse: 9.4099
Epoch 80/100
354/354 [==============================] - 0s 172us/sample - loss: 5.4258 - mae: 1.5943 - mse: 5.4258 - val_loss: 9.7489 - val_mae: 2.2233 - val_mse: 9.7489
Epoch 81/100
354/354 [==============================] - 0s 181us/sample - loss: 5.3006 - mae: 1.5934 - mse: 5.3006 - val_loss: 10.0298 - val_mae: 2.2258 - val_mse: 10.0298
Epoch 82/100
354/354 [==============================] - 0s 177us/sample - loss: 5.2590 - mae: 1.5854 - mse: 5.2590 - val_loss: 9.9642 - val_mae: 2.2718 - val_mse: 9.9642
Epoch 83/100
354/354 [==============================] - 0s 178us/sample - loss: 5.1325 - mae: 1.5765 - mse: 5.1325 - val_loss: 10.0795 - val_mae: 2.2524 - val_mse: 10.0795
Epoch 84/100
354/354 [==============================] - 0s 174us/sample - loss: 5.0736 - mae: 1.5846 - mse: 5.0736 - val_loss: 10.1607 - val_mae: 2.3146 - val_mse: 10.1607
Epoch 85/100
354/354 [==============================] - 0s 168us/sample - loss: 5.0863 - mae: 1.5598 - mse: 5.0863 - val_loss: 10.0663 - val_mae: 2.2961 - val_mse: 10.0663
Epoch 86/100
354/354 [==============================] - 0s 175us/sample - loss: 5.0422 - mae: 1.5758 - mse: 5.0422 - val_loss: 9.3842 - val_mae: 2.2033 - val_mse: 9.3842
Epoch 87/100
354/354 [==============================] - 0s 179us/sample - loss: 4.8308 - mae: 1.5587 - mse: 4.8308 - val_loss: 9.4605 - val_mae: 2.1797 - val_mse: 9.4605
Epoch 88/100
354/354 [==============================] - 0s 172us/sample - loss: 4.7424 - mae: 1.5468 - mse: 4.7424 - val_loss: 12.0587 - val_mae: 2.6306 - val_mse: 12.0587
Epoch 89/100
354/354 [==============================] - 0s 172us/sample - loss: 4.9329 - mae: 1.5937 - mse: 4.9329 - val_loss: 9.9514 - val_mae: 2.2366 - val_mse: 9.9514
Epoch 90/100
354/354 [==============================] - 0s 176us/sample - loss: 4.7181 - mae: 1.5625 - mse: 4.7181 - val_loss: 9.6245 - val_mae: 2.1626 - val_mse: 9.6245
Epoch 91/100
354/354 [==============================] - 0s 182us/sample - loss: 4.6726 - mae: 1.5040 - mse: 4.6726 - val_loss: 9.9543 - val_mae: 2.2394 - val_mse: 9.9543
Epoch 92/100
354/354 [==============================] - 0s 180us/sample - loss: 4.7058 - mae: 1.5416 - mse: 4.7058 - val_loss: 10.6368 - val_mae: 2.3900 - val_mse: 10.6368
Epoch 93/100
354/354 [==============================] - 0s 176us/sample - loss: 4.6515 - mae: 1.5235 - mse: 4.6515 - val_loss: 10.0118 - val_mae: 2.2661 - val_mse: 10.0118
Epoch 94/100
354/354 [==============================] - 0s 163us/sample - loss: 4.6973 - mae: 1.5262 - mse: 4.6973 - val_loss: 9.4214 - val_mae: 2.1961 - val_mse: 9.4214
Epoch 95/100
354/354 [==============================] - 0s 174us/sample - loss: 4.7056 - mae: 1.5392 - mse: 4.7056 - val_loss: 9.6110 - val_mae: 2.1998 - val_mse: 9.6110
Epoch 96/100
354/354 [==============================] - 0s 167us/sample - loss: 4.4156 - mae: 1.4496 - mse: 4.4156 - val_loss: 10.1083 - val_mae: 2.3143 - val_mse: 10.1083
Epoch 97/100
354/354 [==============================] - 0s 173us/sample - loss: 4.5201 - mae: 1.5019 - mse: 4.5201 - val_loss: 9.7179 - val_mae: 2.2635 - val_mse: 9.7179
Epoch 98/100
354/354 [==============================] - 0s 179us/sample - loss: 4.3824 - mae: 1.4403 - mse: 4.3824 - val_loss: 10.2802 - val_mae: 2.2846 - val_mse: 10.2802
Epoch 99/100
354/354 [==============================] - 0s 175us/sample - loss: 4.3252 - mae: 1.4806 - mse: 4.3252 - val_loss: 9.5943 - val_mae: 2.1745 - val_mse: 9.5943
Epoch 100/100
354/354 [==============================] - 0s 178us/sample - loss: 4.4134 - mae: 1.4451 - mse: 4.4134 - val_loss: 12.2396 - val_mae: 2.6152 - val_mse: 12.2396
%% Cell type:markdown id: tags:
## Step 6 - Evaluate
### 6.1 - Model evaluation
MAE = Mean Absolute Error (between the labels and the predictions)
An MAE of 3 means the predictions are off by $3,000 on average (prices are expressed in k$).
%% Cell type:code id: tags:
``` python
score = model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Output
x_test / loss : 12.2396
x_test / mae : 2.6152
x_test / mse : 12.2396
%% Cell type:markdown id: tags:
### 6.2 - Training history
What was the best result during our training?
%% Cell type:code id: tags:
``` python
print("min( val_mae ) : {:.4f}".format( min(history.history["val_mae"]) ) )
```
%% Output
min( val_mae ) : 2.1552
%% Cell type:code id: tags:
``` python
ooo.plot_history(history, plot={'MSE' :['mse', 'val_mse'],
                                'MAE' :['mae', 'val_mae'],
                                'LOSS':['loss','val_loss']})
```
%% Output
%% Cell type:markdown id: tags:
## Step 7 - Restore a model
%% Cell type:markdown id: tags:
### 7.1 - Reload model
%% Cell type:code id: tags:
``` python
loaded_model = tf.keras.models.load_model('./run/models/best_model.h5')
loaded_model.summary()
print("Loaded.")
```
%% Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Dense_n1 (Dense) (None, 64) 896
_________________________________________________________________
Dense_n2 (Dense) (None, 64) 4160
_________________________________________________________________
Output (Dense) (None, 1) 65
=================================================================
Total params: 5,121
Trainable params: 5,121
Non-trainable params: 0
_________________________________________________________________
Loaded.
%% Cell type:markdown id: tags:
### 7.2 - Evaluate it
%% Cell type:code id: tags:
``` python
score = loaded_model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Output
x_test / loss : 9.1795
x_test / mae : 2.1552
x_test / mse : 9.1795
%% Cell type:markdown id: tags:
### 7.3 - Make a prediction
%% Cell type:code id: tags:
``` python
# ---- A hand-made, already-normalized sample of 13 features
mon_test = [-0.20113196, -0.48631663,  1.23572348, -0.26929877,  2.67879106,
            -0.89623587,  1.09961251, -1.05826704, -0.55823117, -0.06159088,
            -1.76085159, -1.97039608,  0.52775666]
mon_test = np.array(mon_test).reshape(1,13)
```
%% Cell type:code id: tags:
``` python
predictions = loaded_model.predict( mon_test )
print("Prédiction : {:.2f} K$ Reality : {:.2f} K$".format(predictions[0][0], y_train[13]))
```
%% Output
Prediction : 16.51 K$ Reality : 20.20 K$
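%% Cell type:markdown id: tags:
To go further (a closing sketch, not in the original) : batch prediction over the whole test set gives a quick feel for the model's accuracy.
%% Cell type:code id: tags:
``` python
# Batch prediction on the full test set ; show the first 5 samples
all_preds = loaded_model.predict(x_test).flatten()
for p, r in zip(all_preds[:5], y_test[:5]):
    print("Prediction : {:6.2f} K$   Reality : {:6.2f} K$".format(p, r))
```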
%% Cell type:markdown id: tags:
-----
That's all folks!
@@ -19,6 +19,14 @@ You will find here :
- sheets and practical information :
  - **[Configuration SSH](../-/wikis/howto-ssh)**
- [Regression with a Dense Network (DNN)](BHPD/01-DNN-Regression.ipynb)<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;A Simple regression with a Dense Neural Network (DNN) - BHPD dataset
- [Regression with a Dense Network (DNN) - Advanced code](BHPD/02-DNN-Regression-Premium.ipynb)<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;More advanced example of DNN network code - BHPD dataset
## Installation