%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/00-Fidle-header-01.svg"></img>
# <!-- TITLE --> [BHP2] - Regression with a Dense Network (DNN) - Advanced code
<!-- DESC --> More advanced example of DNN code - BHPD dataset
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->
## Objectives :
- Predict **housing prices** from a set of house features.
- Understand the principle and the architecture of a regression with a dense neural network, including saving and restoring the trained model.
The **[Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html)** consists of prices of houses in various places in Boston.
Alongside price, the dataset also provides the following information:
- CRIM: per capita crime rate by town
- ZN: proportion of residential land zoned for lots larger than 25,000 sq.ft.
- INDUS: proportion of non-retail business acres per town
- CHAS: Charles River dummy variable (1 if the tract bounds the river, 0 otherwise)
- NOX: nitric oxides concentration (parts per 10 million)
- RM: average number of rooms per dwelling
- AGE: proportion of owner-occupied units built prior to 1940
- DIS: weighted distances to five Boston employment centers
- RAD: index of accessibility to radial highways
- TAX: full-value property-tax rate per 10,000 dollars
- PTRATIO: pupil-teacher ratio by town
- B: 1000(Bk - 0.63)^2, where Bk is the proportion of people of African American descent by town
- LSTAT: percentage of lower status of the population
- MEDV: median value of owner-occupied homes, in thousands of dollars
## What we're going to do :
- (Retrieve data)
- (Preparing the data)
- (Build a model)
- Train and save the model
- Restore saved model
- Evaluate the model
- Make some predictions
%% Cell type:markdown id: tags:
## Step 1 - Import and init
%% Cell type:code id: tags:
``` python
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os,sys
from IPython.display import Markdown
from importlib import reload
sys.path.append('..')
import fidle.pwk as ooo
ooo.init()
os.makedirs('./run/models', mode=0o750, exist_ok=True)
```
%% Output
FIDLE 2020 - Practical Work Module
Version : 0.4.3
Run time : Friday 28 February 2020, 10:23:12
TensorFlow version : 2.0.0
Keras version : 2.2.4-tf
%% Cell type:markdown id: tags:
## Step 2 - Retrieve data
### 2.1 - Option 1 : From Keras
Boston housing is a famous historic dataset, so we can get it directly from [Keras datasets](https://www.tensorflow.org/api_docs/python/tf/keras/datasets)
%% Cell type:raw id: tags:
(x_train, y_train), (x_test, y_test) = keras.datasets.boston_housing.load_data(test_split=0.2, seed=113)
%% Cell type:markdown id: tags:
### 2.2 - Option 2 : From a csv file
More fun !
%% Cell type:code id: tags:
``` python
data = pd.read_csv('./data/BostonHousing.csv', header=0)
display(data.head(5).style.format("{0:.2f}"))
print('Missing data : ', data.isna().sum().sum(), '  Shape is : ', data.shape)
```
%% Output
Missing data :  0   Shape is :  (506, 14)
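%% Cell type:markdown id: tags:
A quick check that the CSV columns match the attributes listed above (assuming the file uses the lowercase attribute names, as `medv` below suggests):
%% Cell type:code id: tags:
``` python
# ---- 14 columns are expected : the 13 features, plus medv (the price)
print(list(data.columns))
```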
%% Cell type:markdown id: tags:
## Step 3 - Preparing the data
### 3.1 - Split data
We will use 70% of the data for training and 30% for validation.
x will be the input data and y the expected output.
%% Cell type:code id: tags:
``` python
# ---- Split => train, test (70% / 30%)
#
data_train = data.sample(frac=0.7, axis=0)   # random sampling, no fixed seed : the split changes at each run
data_test  = data.drop(data_train.index)
# ---- Split => x,y (medv is price)
#
x_train = data_train.drop('medv', axis=1)
y_train = data_train['medv']
x_test = data_test.drop('medv', axis=1)
y_test = data_test['medv']
print('Original data shape was : ',data.shape)
print('x_train : ',x_train.shape, 'y_train : ',y_train.shape)
print('x_test : ',x_test.shape, 'y_test : ',y_test.shape)
```
%% Output
Original data shape was : (506, 14)
x_train : (354, 13) y_train : (354,)
x_test : (152, 13) y_test : (152,)
%% Cell type:markdown id: tags:
### 3.2 - Data normalization
**Note :**
- All input data must be normalized : train and test.
- To do this, we subtract the mean and divide by the standard deviation.
- But the test data must not be used in any way, even for normalization.
- The mean and the standard deviation are therefore computed on the train data only, as shown in the formula below.
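Each feature $x$ is rescaled as :

$$x' = \frac{x - \mu_{train}}{\sigma_{train}}$$

where $\mu_{train}$ and $\sigma_{train}$ are the mean and standard deviation of that feature over the training set.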
%% Cell type:code id: tags:
``` python
display(x_train.describe().style.format("{0:.2f}").set_caption("Before normalization :"))
mean = x_train.mean()
std = x_train.std()
x_train = (x_train - mean) / std
x_test = (x_test - mean) / std
display(x_train.describe().style.format("{0:.2f}").set_caption("After normalization :"))
x_train, y_train = np.array(x_train), np.array(y_train)
x_test, y_test = np.array(x_test), np.array(y_test)
```
%% Output
%% Cell type:markdown id: tags:
## Step 4 - Build a model
More information about :
- [Optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers)
- [Activation](https://www.tensorflow.org/api_docs/python/tf/keras/activations)
- [Loss](https://www.tensorflow.org/api_docs/python/tf/keras/losses)
- [Metrics](https://www.tensorflow.org/api_docs/python/tf/keras/metrics)
%% Cell type:code id: tags:
``` python
def get_model_v1(shape):
    # ---- A simple sequential model : 2 hidden dense layers, 1 linear output neuron
    model = keras.models.Sequential()
    model.add(keras.layers.Input(shape, name="InputLayer"))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n1'))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n2'))
    model.add(keras.layers.Dense(1, name='Output'))
    model.compile(optimizer = 'rmsprop',
                  loss      = 'mse',
                  metrics   = ['mae', 'mse'])
    return model
```
%% Cell type:markdown id: tags:
## Step 5 - Train the model
### 5.1 - Get it
%% Cell type:code id: tags:
``` python
model=get_model_v1( (13,) )
model.summary()
img=keras.utils.plot_model( model, to_file='./run/model.png', show_shapes=True, show_layer_names=True, dpi=96)
display(img)
```
%% Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Dense_n1 (Dense) (None, 64) 896
_________________________________________________________________
Dense_n2 (Dense) (None, 64) 4160
_________________________________________________________________
Output (Dense) (None, 1) 65
=================================================================
Total params: 5,121
Trainable params: 5,121
Non-trainable params: 0
_________________________________________________________________
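%% Cell type:markdown id: tags:
The parameter counts follow directly from the layer sizes : Dense_n1 has 13×64 weights + 64 biases = 896 parameters, Dense_n2 has 64×64 + 64 = 4,160, and Output has 64×1 + 1 = 65, i.e. 5,121 parameters in total.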
%% Cell type:markdown id: tags:
### 5.2 - Add callback
%% Cell type:code id: tags:
``` python
os.makedirs('./run/models', mode=0o750, exist_ok=True)
save_path = "./run/models/best_model.h5"
# ---- Keep only the best model so far (lowest val_loss, the default monitored value)
savemodel_callback = tf.keras.callbacks.ModelCheckpoint(filepath=save_path, verbose=0, save_best_only=True)
```
%% Cell type:markdown id: tags:
### 5.3 - Train it
%% Cell type:code id: tags:
``` python
history = model.fit(x_train,
                    y_train,
                    epochs          = 100,
                    batch_size      = 10,
                    verbose         = 1,
                    validation_data = (x_test, y_test),
                    callbacks       = [savemodel_callback])
```
%% Output
Train on 354 samples, validate on 152 samples
Epoch 1/100
354/354 [==============================] - 1s 3ms/sample - loss: 452.0200 - mae: 19.1483 - mse: 452.0200 - val_loss: 295.4043 - val_mae: 15.2476 - val_mse: 295.4043
Epoch 2/100
354/354 [==============================] - 0s 318us/sample - loss: 203.8159 - mae: 11.9110 - mse: 203.8159 - val_loss: 102.2654 - val_mae: 8.2041 - val_mse: 102.2654
Epoch 3/100
354/354 [==============================] - 0s 322us/sample - loss: 82.2725 - mae: 6.6720 - mse: 82.2725 - val_loss: 48.9591 - val_mae: 5.3325 - val_mse: 48.9591
Epoch 4/100
354/354 [==============================] - 0s 330us/sample - loss: 47.3532 - mae: 4.9323 - mse: 47.3532 - val_loss: 30.3381 - val_mae: 4.0806 - val_mse: 30.3381
Epoch 5/100
354/354 [==============================] - 0s 332us/sample - loss: 34.2380 - mae: 4.1989 - mse: 34.2380 - val_loss: 25.3102 - val_mae: 3.6149 - val_mse: 25.3102
Epoch 6/100
354/354 [==============================] - 0s 330us/sample - loss: 27.7203 - mae: 3.7333 - mse: 27.7203 - val_loss: 24.1136 - val_mae: 3.4089 - val_mse: 24.1136
Epoch 7/100
354/354 [==============================] - 0s 336us/sample - loss: 23.4702 - mae: 3.4503 - mse: 23.4702 - val_loss: 21.9095 - val_mae: 3.1906 - val_mse: 21.9095
Epoch 8/100
354/354 [==============================] - 0s 322us/sample - loss: 19.8215 - mae: 3.1687 - mse: 19.8215 - val_loss: 21.9063 - val_mae: 3.2564 - val_mse: 21.9063
Epoch 9/100
354/354 [==============================] - 0s 326us/sample - loss: 17.6146 - mae: 2.9640 - mse: 17.6146 - val_loss: 19.1573 - val_mae: 2.9280 - val_mse: 19.1573
Epoch 10/100
354/354 [==============================] - 0s 288us/sample - loss: 15.9631 - mae: 2.8267 - mse: 15.9631 - val_loss: 19.1600 - val_mae: 2.8806 - val_mse: 19.1600
Epoch 11/100
354/354 [==============================] - 0s 326us/sample - loss: 14.4344 - mae: 2.6588 - mse: 14.4344 - val_loss: 18.0972 - val_mae: 2.7704 - val_mse: 18.0972
Epoch 12/100
354/354 [==============================] - 0s 330us/sample - loss: 13.3890 - mae: 2.5821 - mse: 13.3890 - val_loss: 18.0529 - val_mae: 2.7683 - val_mse: 18.0529
Epoch 13/100
354/354 [==============================] - 0s 326us/sample - loss: 12.7002 - mae: 2.5117 - mse: 12.7002 - val_loss: 17.7848 - val_mae: 2.6781 - val_mse: 17.7848
Epoch 14/100
354/354 [==============================] - 0s 279us/sample - loss: 11.8030 - mae: 2.4625 - mse: 11.8030 - val_loss: 18.4840 - val_mae: 2.7626 - val_mse: 18.4840
Epoch 15/100
354/354 [==============================] - 0s 331us/sample - loss: 11.4627 - mae: 2.3904 - mse: 11.4627 - val_loss: 17.1289 - val_mae: 2.6199 - val_mse: 17.1289
Epoch 16/100
354/354 [==============================] - 0s 284us/sample - loss: 11.1781 - mae: 2.3387 - mse: 11.1781 - val_loss: 17.9369 - val_mae: 2.6804 - val_mse: 17.9369
Epoch 17/100
354/354 [==============================] - 0s 326us/sample - loss: 10.7485 - mae: 2.3250 - mse: 10.7485 - val_loss: 16.6649 - val_mae: 2.5390 - val_mse: 16.6649
Epoch 18/100
354/354 [==============================] - 0s 294us/sample - loss: 10.5149 - mae: 2.2548 - mse: 10.5149 - val_loss: 18.1112 - val_mae: 2.6858 - val_mse: 18.1112
Epoch 19/100
354/354 [==============================] - 0s 295us/sample - loss: 10.4495 - mae: 2.2872 - mse: 10.4495 - val_loss: 17.9377 - val_mae: 2.6937 - val_mse: 17.9377
Epoch 20/100
354/354 [==============================] - 0s 277us/sample - loss: 10.2075 - mae: 2.2586 - mse: 10.2075 - val_loss: 17.5565 - val_mae: 2.5374 - val_mse: 17.5565
Epoch 21/100
354/354 [==============================] - 0s 325us/sample - loss: 10.0869 - mae: 2.2396 - mse: 10.0869 - val_loss: 16.2770 - val_mae: 2.4551 - val_mse: 16.2770
Epoch 22/100
354/354 [==============================] - 0s 278us/sample - loss: 9.5957 - mae: 2.1407 - mse: 9.5957 - val_loss: 17.8160 - val_mae: 2.6874 - val_mse: 17.8160
Epoch 23/100
354/354 [==============================] - 0s 304us/sample - loss: 9.7569 - mae: 2.2057 - mse: 9.7569 - val_loss: 15.5761 - val_mae: 2.4537 - val_mse: 15.5761
Epoch 24/100
354/354 [==============================] - 0s 286us/sample - loss: 9.4873 - mae: 2.1643 - mse: 9.4873 - val_loss: 16.8661 - val_mae: 2.5735 - val_mse: 16.8661
Epoch 25/100
354/354 [==============================] - 0s 287us/sample - loss: 9.0956 - mae: 2.1422 - mse: 9.0956 - val_loss: 16.5815 - val_mae: 2.4812 - val_mse: 16.5815
Epoch 26/100
354/354 [==============================] - 0s 288us/sample - loss: 9.3352 - mae: 2.1471 - mse: 9.3352 - val_loss: 16.0146 - val_mae: 2.4404 - val_mse: 16.0146
Epoch 27/100
354/354 [==============================] - 0s 286us/sample - loss: 8.6794 - mae: 2.0948 - mse: 8.6794 - val_loss: 18.2565 - val_mae: 2.7272 - val_mse: 18.2565
Epoch 28/100
354/354 [==============================] - 0s 294us/sample - loss: 8.9854 - mae: 2.1159 - mse: 8.9854 - val_loss: 16.4515 - val_mae: 2.5282 - val_mse: 16.4515
Epoch 29/100
354/354 [==============================] - 0s 286us/sample - loss: 8.8348 - mae: 2.1032 - mse: 8.8348 - val_loss: 17.2604 - val_mae: 2.5932 - val_mse: 17.2604
Epoch 30/100
354/354 [==============================] - 0s 244us/sample - loss: 8.7365 - mae: 2.0970 - mse: 8.7365 - val_loss: 16.1155 - val_mae: 2.4509 - val_mse: 16.1155
Epoch 31/100
354/354 [==============================] - 0s 282us/sample - loss: 8.6290 - mae: 2.0487 - mse: 8.6290 - val_loss: 16.9125 - val_mae: 2.5010 - val_mse: 16.9125
Epoch 32/100
354/354 [==============================] - 0s 280us/sample - loss: 8.6531 - mae: 2.0411 - mse: 8.6531 - val_loss: 15.7585 - val_mae: 2.4288 - val_mse: 15.7585
Epoch 33/100
354/354 [==============================] - 0s 330us/sample - loss: 8.6551 - mae: 2.0516 - mse: 8.6551 - val_loss: 15.4765 - val_mae: 2.4073 - val_mse: 15.4765
Epoch 34/100
354/354 [==============================] - 0s 292us/sample - loss: 8.4218 - mae: 2.0072 - mse: 8.4218 - val_loss: 16.2900 - val_mae: 2.5081 - val_mse: 16.2900
Epoch 35/100
354/354 [==============================] - 0s 293us/sample - loss: 8.3149 - mae: 1.9851 - mse: 8.3149 - val_loss: 15.7184 - val_mae: 2.4337 - val_mse: 15.7184
Epoch 36/100
354/354 [==============================] - 0s 289us/sample - loss: 8.4496 - mae: 2.0142 - mse: 8.4496 - val_loss: 16.2760 - val_mae: 2.5489 - val_mse: 16.2760
Epoch 37/100
354/354 [==============================] - 0s 334us/sample - loss: 8.0962 - mae: 1.9872 - mse: 8.0962 - val_loss: 15.3895 - val_mae: 2.3543 - val_mse: 15.3895
Epoch 38/100
354/354 [==============================] - 0s 298us/sample - loss: 8.1599 - mae: 1.9882 - mse: 8.1599 - val_loss: 16.0081 - val_mae: 2.3977 - val_mse: 16.0081
Epoch 39/100
354/354 [==============================] - 0s 286us/sample - loss: 7.9958 - mae: 1.9817 - mse: 7.9958 - val_loss: 15.8999 - val_mae: 2.4583 - val_mse: 15.8999
Epoch 40/100
354/354 [==============================] - 0s 282us/sample - loss: 7.8666 - mae: 1.9441 - mse: 7.8666 - val_loss: 16.8131 - val_mae: 2.6931 - val_mse: 16.8131
Epoch 41/100
354/354 [==============================] - 0s 293us/sample - loss: 7.9312 - mae: 1.9216 - mse: 7.9312 - val_loss: 15.4608 - val_mae: 2.3995 - val_mse: 15.4608
Epoch 42/100
354/354 [==============================] - 0s 286us/sample - loss: 7.6752 - mae: 1.9127 - mse: 7.6752 - val_loss: 15.8675 - val_mae: 2.5118 - val_mse: 15.8675
Epoch 43/100
354/354 [==============================] - 0s 332us/sample - loss: 7.7535 - mae: 1.9296 - mse: 7.7535 - val_loss: 15.2040 - val_mae: 2.3731 - val_mse: 15.2040
Epoch 44/100
354/354 [==============================] - 0s 331us/sample - loss: 7.6188 - mae: 1.9150 - mse: 7.6188 - val_loss: 15.0409 - val_mae: 2.3680 - val_mse: 15.0409
Epoch 45/100
354/354 [==============================] - 0s 284us/sample - loss: 7.6286 - mae: 1.8755 - mse: 7.6286 - val_loss: 15.1650 - val_mae: 2.3595 - val_mse: 15.1650
Epoch 46/100
354/354 [==============================] - 0s 278us/sample - loss: 7.7937 - mae: 1.9318 - mse: 7.7937 - val_loss: 15.7196 - val_mae: 2.4218 - val_mse: 15.7196
Epoch 47/100
354/354 [==============================] - 0s 275us/sample - loss: 7.4244 - mae: 1.9022 - mse: 7.4244 - val_loss: 15.5651 - val_mae: 2.4811 - val_mse: 15.5651
Epoch 48/100
354/354 [==============================] - 0s 330us/sample - loss: 7.4042 - mae: 1.9083 - mse: 7.4042 - val_loss: 14.7377 - val_mae: 2.3598 - val_mse: 14.7377
Epoch 49/100
354/354 [==============================] - 0s 270us/sample - loss: 7.3230 - mae: 1.8741 - mse: 7.3230 - val_loss: 15.2313 - val_mae: 2.4210 - val_mse: 15.2313
Epoch 50/100
354/354 [==============================] - 0s 276us/sample - loss: 7.3075 - mae: 1.8614 - mse: 7.3075 - val_loss: 14.7584 - val_mae: 2.3305 - val_mse: 14.7584
Epoch 51/100
354/354 [==============================] - 0s 270us/sample - loss: 7.4376 - mae: 1.8956 - mse: 7.4376 - val_loss: 15.3226 - val_mae: 2.3742 - val_mse: 15.3226
Epoch 52/100
354/354 [==============================] - 0s 287us/sample - loss: 7.1467 - mae: 1.8380 - mse: 7.1467 - val_loss: 15.5150 - val_mae: 2.4291 - val_mse: 15.5150
Epoch 53/100
354/354 [==============================] - 0s 271us/sample - loss: 6.9376 - mae: 1.8018 - mse: 6.9376 - val_loss: 16.2807 - val_mae: 2.5440 - val_mse: 16.2807
Epoch 54/100
354/354 [==============================] - 0s 289us/sample - loss: 7.0885 - mae: 1.8170 - mse: 7.0885 - val_loss: 15.2975 - val_mae: 2.4258 - val_mse: 15.2975
Epoch 55/100
354/354 [==============================] - 0s 284us/sample - loss: 7.0596 - mae: 1.8107 - mse: 7.0596 - val_loss: 15.7460 - val_mae: 2.4825 - val_mse: 15.7460
Epoch 56/100
354/354 [==============================] - 0s 288us/sample - loss: 6.7812 - mae: 1.8335 - mse: 6.7812 - val_loss: 14.7849 - val_mae: 2.3684 - val_mse: 14.7849
Epoch 57/100
354/354 [==============================] - 0s 290us/sample - loss: 6.9172 - mae: 1.8140 - mse: 6.9172 - val_loss: 15.1139 - val_mae: 2.4500 - val_mse: 15.1139
Epoch 58/100
354/354 [==============================] - 0s 284us/sample - loss: 6.8010 - mae: 1.7637 - mse: 6.8010 - val_loss: 16.7211 - val_mae: 2.5769 - val_mse: 16.7211
Epoch 59/100
354/354 [==============================] - 0s 271us/sample - loss: 6.8954 - mae: 1.8046 - mse: 6.8954 - val_loss: 15.1101 - val_mae: 2.4743 - val_mse: 15.1101
Epoch 60/100
354/354 [==============================] - 0s 280us/sample - loss: 6.7740 - mae: 1.7866 - mse: 6.7740 - val_loss: 15.0811 - val_mae: 2.3838 - val_mse: 15.0811
Epoch 61/100
354/354 [==============================] - 0s 323us/sample - loss: 6.8996 - mae: 1.7872 - mse: 6.8996 - val_loss: 14.4203 - val_mae: 2.3199 - val_mse: 14.4203
Epoch 62/100
354/354 [==============================] - 0s 277us/sample - loss: 6.7188 - mae: 1.7858 - mse: 6.7188 - val_loss: 14.5972 - val_mae: 2.3610 - val_mse: 14.5972
Epoch 63/100
354/354 [==============================] - 0s 266us/sample - loss: 6.5708 - mae: 1.7710 - mse: 6.5708 - val_loss: 14.5145 - val_mae: 2.3563 - val_mse: 14.5145
Epoch 64/100
354/354 [==============================] - 0s 273us/sample - loss: 6.1133 - mae: 1.6706 - mse: 6.1133 - val_loss: 14.9870 - val_mae: 2.4017 - val_mse: 14.9870
Epoch 65/100
354/354 [==============================] - 0s 313us/sample - loss: 6.4980 - mae: 1.7295 - mse: 6.4980 - val_loss: 14.0636 - val_mae: 2.3661 - val_mse: 14.0636
Epoch 66/100
354/354 [==============================] - 0s 262us/sample - loss: 6.5237 - mae: 1.7277 - mse: 6.5237 - val_loss: 14.2366 - val_mae: 2.3318 - val_mse: 14.2366
Epoch 67/100
354/354 [==============================] - 0s 326us/sample - loss: 6.3067 - mae: 1.7433 - mse: 6.3067 - val_loss: 14.0032 - val_mae: 2.3350 - val_mse: 14.0032
Epoch 68/100
354/354 [==============================] - 0s 286us/sample - loss: 6.4447 - mae: 1.7336 - mse: 6.4447 - val_loss: 14.4271 - val_mae: 2.3149 - val_mse: 14.4271
Epoch 69/100
354/354 [==============================] - 0s 332us/sample - loss: 6.3821 - mae: 1.7012 - mse: 6.3821 - val_loss: 13.9716 - val_mae: 2.3141 - val_mse: 13.9716
Epoch 70/100
354/354 [==============================] - 0s 251us/sample - loss: 6.3734 - mae: 1.7080 - mse: 6.3734 - val_loss: 14.9184 - val_mae: 2.4716 - val_mse: 14.9184
Epoch 71/100
354/354 [==============================] - 0s 321us/sample - loss: 6.4273 - mae: 1.7281 - mse: 6.4273 - val_loss: 13.8686 - val_mae: 2.3176 - val_mse: 13.8686
Epoch 72/100
354/354 [==============================] - 0s 285us/sample - loss: 6.2473 - mae: 1.6967 - mse: 6.2473 - val_loss: 14.2249 - val_mae: 2.3450 - val_mse: 14.2249
Epoch 73/100
354/354 [==============================] - 0s 286us/sample - loss: 6.3427 - mae: 1.7034 - mse: 6.3427 - val_loss: 14.3159 - val_mae: 2.3431 - val_mse: 14.3159
Epoch 74/100
354/354 [==============================] - 0s 287us/sample - loss: 6.0929 - mae: 1.6752 - mse: 6.0929 - val_loss: 14.2151 - val_mae: 2.3644 - val_mse: 14.2151
Epoch 75/100
354/354 [==============================] - 0s 289us/sample - loss: 6.1445 - mae: 1.6985 - mse: 6.1445 - val_loss: 14.8251 - val_mae: 2.4202 - val_mse: 14.8251
Epoch 76/100
354/354 [==============================] - 0s 311us/sample - loss: 6.2184 - mae: 1.6867 - mse: 6.2184 - val_loss: 14.0596 - val_mae: 2.3274 - val_mse: 14.0596
Epoch 77/100
354/354 [==============================] - 0s 340us/sample - loss: 6.1201 - mae: 1.6785 - mse: 6.1201 - val_loss: 13.4886 - val_mae: 2.2769 - val_mse: 13.4886
Epoch 78/100
354/354 [==============================] - 0s 286us/sample - loss: 5.9001 - mae: 1.6716 - mse: 5.9001 - val_loss: 14.0295 - val_mae: 2.3214 - val_mse: 14.0295
Epoch 79/100
354/354 [==============================] - 0s 284us/sample - loss: 6.0389 - mae: 1.6783 - mse: 6.0389 - val_loss: 14.0250 - val_mae: 2.3245 - val_mse: 14.0250
Epoch 80/100
354/354 [==============================] - 0s 288us/sample - loss: 5.8268 - mae: 1.6458 - mse: 5.8268 - val_loss: 15.2746 - val_mae: 2.4834 - val_mse: 15.2746
Epoch 81/100
354/354 [==============================] - 0s 284us/sample - loss: 5.8671 - mae: 1.6680 - mse: 5.8671 - val_loss: 14.4935 - val_mae: 2.4353 - val_mse: 14.4935
Epoch 82/100
354/354 [==============================] - 0s 274us/sample - loss: 5.8115 - mae: 1.6742 - mse: 5.8115 - val_loss: 14.6922 - val_mae: 2.3631 - val_mse: 14.6922
Epoch 83/100
354/354 [==============================] - 0s 331us/sample - loss: 5.8561 - mae: 1.6727 - mse: 5.8561 - val_loss: 13.4236 - val_mae: 2.2982 - val_mse: 13.4236
Epoch 84/100
354/354 [==============================] - 0s 290us/sample - loss: 5.7500 - mae: 1.5833 - mse: 5.7500 - val_loss: 14.4867 - val_mae: 2.4330 - val_mse: 14.4867
Epoch 85/100
354/354 [==============================] - 0s 286us/sample - loss: 5.6700 - mae: 1.6435 - mse: 5.6700 - val_loss: 13.9873 - val_mae: 2.3614 - val_mse: 13.9873
Epoch 86/100
354/354 [==============================] - 0s 268us/sample - loss: 5.6816 - mae: 1.6524 - mse: 5.6816 - val_loss: 13.4864 - val_mae: 2.3505 - val_mse: 13.4864
Epoch 87/100
354/354 [==============================] - 0s 267us/sample - loss: 5.5838 - mae: 1.6220 - mse: 5.5838 - val_loss: 15.4727 - val_mae: 2.5215 - val_mse: 15.4727
Epoch 88/100
354/354 [==============================] - 0s 284us/sample - loss: 5.6117 - mae: 1.6208 - mse: 5.6117 - val_loss: 13.6392 - val_mae: 2.3150 - val_mse: 13.6392
Epoch 89/100
354/354 [==============================] - 0s 324us/sample - loss: 5.5648 - mae: 1.6051 - mse: 5.5648 - val_loss: 13.2082 - val_mae: 2.2858 - val_mse: 13.2082
Epoch 90/100
354/354 [==============================] - 0s 288us/sample - loss: 5.6019 - mae: 1.5946 - mse: 5.6019 - val_loss: 13.7882 - val_mae: 2.3677 - val_mse: 13.7882
Epoch 91/100
354/354 [==============================] - 0s 336us/sample - loss: 5.4979 - mae: 1.6000 - mse: 5.4979 - val_loss: 12.9619 - val_mae: 2.2898 - val_mse: 12.9619
Epoch 92/100
354/354 [==============================] - 0s 294us/sample - loss: 5.4595 - mae: 1.5815 - mse: 5.4595 - val_loss: 13.8617 - val_mae: 2.3735 - val_mse: 13.8617
Epoch 93/100
354/354 [==============================] - 0s 321us/sample - loss: 5.1999 - mae: 1.6043 - mse: 5.1999 - val_loss: 12.9011 - val_mae: 2.3053 - val_mse: 12.9011
Epoch 94/100
354/354 [==============================] - 0s 268us/sample - loss: 5.2630 - mae: 1.5463 - mse: 5.2630 - val_loss: 14.3610 - val_mae: 2.4348 - val_mse: 14.3610
Epoch 95/100
354/354 [==============================] - 0s 300us/sample - loss: 5.3272 - mae: 1.6047 - mse: 5.3272 - val_loss: 12.9650 - val_mae: 2.2989 - val_mse: 12.9650
Epoch 96/100
354/354 [==============================] - 0s 283us/sample - loss: 5.3137 - mae: 1.5796 - mse: 5.3137 - val_loss: 13.9091 - val_mae: 2.3669 - val_mse: 13.9091
Epoch 97/100
354/354 [==============================] - 0s 280us/sample - loss: 5.2891 - mae: 1.5773 - mse: 5.2891 - val_loss: 13.2578 - val_mae: 2.3012 - val_mse: 13.2578
Epoch 98/100
354/354 [==============================] - 0s 264us/sample - loss: 5.3977 - mae: 1.5920 - mse: 5.3977 - val_loss: 13.8690 - val_mae: 2.4075 - val_mse: 13.8690
Epoch 99/100
354/354 [==============================] - 0s 273us/sample - loss: 5.3071 - mae: 1.5391 - mse: 5.3071 - val_loss: 12.9043 - val_mae: 2.2816 - val_mse: 12.9043
Epoch 100/100
354/354 [==============================] - 0s 318us/sample - loss: 5.2748 - mae: 1.5458 - mse: 5.2748 - val_loss: 12.6915 - val_mae: 2.2683 - val_mse: 12.6915
%% Cell type:markdown id: tags:
## Step 6 - Evaluate
### 6.1 - Model evaluation
MAE = Mean Absolute Error (between the labels and predictions)
An MAE equal to 3 represents an average prediction error of $3k, since prices are expressed in thousands of dollars.
%% Cell type:code id: tags:
``` python
score = model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Output
x_test / loss : 12.6915
x_test / mae : 2.2683
x_test / mse : 12.6915
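%% Cell type:markdown id: tags:
As a sanity check, these metrics can also be computed by hand from the predictions (a minimal sketch, reusing `model`, `x_test` and `y_test` from the cells above):
%% Cell type:code id: tags:
``` python
# ---- Manual cross-check of MAE and MSE
y_pred = model.predict(x_test).reshape(-1)    # predictions as a 1D array
mae = np.mean(np.abs(y_pred - y_test))        # mean absolute error
mse = np.mean((y_pred - y_test)**2)           # mean squared error (here, also the loss)
print('manual mae : {:5.4f}   manual mse : {:5.4f}'.format(mae, mse))
```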
%% Cell type:markdown id: tags:
### 6.2 - Training history
What was the best result during our training ?
%% Cell type:code id: tags:
``` python
print("min( val_mae ) : {:.4f}".format( min(history.history["val_mae"]) ) )
```
%% Output
min( val_mae ) : 2.2683
%% Cell type:code id: tags:
``` python
ooo.plot_history(history, plot={'MSE' :['mse', 'val_mse'],
'MAE' :['mae', 'val_mae'],
'LOSS':['loss','val_loss']})
```
%% Output
%% Cell type:markdown id: tags:
## Step 7 - Restore a model
%% Cell type:markdown id: tags:
### 7.1 - Reload model
%% Cell type:code id: tags:
``` python
loaded_model = tf.keras.models.load_model('./run/models/best_model.h5')
loaded_model.summary()
print("Loaded.")
```
%% Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Dense_n1 (Dense) (None, 64) 896
_________________________________________________________________
Dense_n2 (Dense) (None, 64) 4160
_________________________________________________________________
Output (Dense) (None, 1) 65
=================================================================
Total params: 5,121
Trainable params: 5,121
Non-trainable params: 0
_________________________________________________________________
Loaded.
%% Cell type:markdown id: tags:
### 7.2 - Evaluate it :
%% Cell type:code id: tags:
``` python
score = loaded_model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Output
x_test / loss : 12.6915
x_test / mae : 2.2683
x_test / mse : 12.6915
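%% Cell type:markdown id: tags:
The scores are identical to those of step 6.1 : here the best checkpoint (lowest `val_loss`) happens to be the one saved at the last epoch, so the restored model is the same as the final one.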
%% Cell type:markdown id: tags:
### 7.3 - Make a prediction
%% Cell type:code id: tags:
``` python
# ---- A sample, already in normalized units (as the network expects)
my_data = [ 1.26425925, -0.48522739,  1.0436489 , -0.23112788,  1.37120745,
           -2.14308942,  1.13489104, -1.06802005,  1.71189006,  1.57042287,
            0.77859951,  0.14769795,  2.7585581 ]
real_price = 10.4
my_data = np.array(my_data).reshape(1, 13)
```
%% Cell type:code id: tags:
``` python
predictions = loaded_model.predict( my_data )
print("Prédiction : {:.2f} K$ Reality : {:.2f} K$".format(predictions[0][0], real_price))
```
%% Output
Prediction : 10.75 K$   Reality : 10.40 K$
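%% Cell type:markdown id: tags:
Note that `my_data` above is already normalized. Starting from raw feature values, the same normalization must be applied first; a minimal sketch, reusing `data`, `mean` and `std` from step 3:
%% Cell type:code id: tags:
``` python
# ---- Normalize a raw sample with the training statistics, then predict
raw_sample = data.drop('medv', axis=1).iloc[0]              # a raw (un-normalized) row from the CSV
x_sample   = np.array((raw_sample - mean) / std).reshape(1, 13)
print('Prediction : {:.2f} K$'.format(loaded_model.predict(x_sample)[0][0]))
```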
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/00-Fidle-logo-01.svg"></img>