%% Cell type:markdown id: tags:
<img width="800px" src="../fidle/img/00-Fidle-header-01.svg"></img>
# <!-- TITLE --> [BHP2] - Regression with a Dense Network (DNN) - Advanced code
<!-- DESC --> More advanced example of DNN network code - BHPD dataset
<!-- AUTHOR : Jean-Luc Parouty (CNRS/SIMaP) -->
## Objectives:
- Predict **housing prices** from a set of house features.
- Understand the principle and the architecture of a regression with a dense neural network, including saving and restoring the trained model.
The **[Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html)** contains the prices of houses in various places in Boston.
Alongside the price, the dataset also provides the following information:
- CRIM: This is the per capita crime rate by town
- ZN: This is the proportion of residential land zoned for lots larger than 25,000 sq.ft
- INDUS: This is the proportion of non-retail business acres per town
- CHAS: This is the Charles River dummy variable (this is equal to 1 if tract bounds river; 0 otherwise)
- NOX: This is the nitric oxides concentration (parts per 10 million)
- RM: This is the average number of rooms per dwelling
- AGE: This is the proportion of owner-occupied units built prior to 1940
- DIS: This is the weighted distances to five Boston employment centers
- RAD: This is the index of accessibility to radial highways
- TAX: This is the full-value property-tax rate per 10,000 dollars
- PTRATIO: This is the pupil-teacher ratio by town
- B: This is calculated as 1000(Bk - 0.63)^2, where Bk is the proportion of people of African American descent by town
- LSTAT: This is the percentage lower status of the population
- MEDV: This is the median value of owner-occupied homes in 1000 dollars
## What we're going to do:
- (Retrieve data)
- (Prepare the data)
- (Build a model)
- Train and save the model
- Restore the saved model
- Evaluate the model
- Make some predictions
%% Cell type:markdown id: tags:
## Step 1 - Import and init
%% Cell type:code id: tags:
``` python
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os, sys
from IPython.display import Markdown
from importlib import reload
sys.path.append('..')
import fidle.pwk as ooo      # Fidle practical-work helper module
ooo.init()
os.makedirs('./run/models', mode=0o750, exist_ok=True)
```
%% Output
FIDLE 2020 - Practical Work Module
Version : 0.4.3
Run time : Friday 28 February 2020, 10:23:12
TensorFlow version : 2.0.0
Keras version : 2.2.4-tf
%% Cell type:markdown id: tags:
## Step 2 - Retrieve data
### 2.1 - Option 1: From Keras
Boston housing is a famous historic dataset, so we can get it directly from [Keras datasets](https://www.tensorflow.org/api_docs/python/tf/keras/datasets).
%% Cell type:raw id: tags:
(x_train, y_train), (x_test, y_test) = keras.datasets.boston_housing.load_data(test_split=0.2, seed=113)
%% Cell type:markdown id: tags:
### 2.2 - Option 2: From a CSV file
More fun!
%% Cell type:code id: tags:
``` python
data = pd.read_csv('./data/BostonHousing.csv', header=0)
display(data.head(5).style.format("{0:.2f}"))
print('Missing Data : ',data.isna().sum().sum(), ' Shape is : ', data.shape)
```
%% Output
Missing Data : 0 Shape is : (506, 14)
%% Cell type:markdown id: tags:
## Step 3 - Preparing the data
### 3.1 - Split data
We will use 70% of the data for training and 30% for validation.
x will be the input data and y the expected output.
%% Cell type:code id: tags:
``` python
# ---- Split => train, test
#
data_train = data.sample(frac=0.7, axis=0)
data_test  = data.drop(data_train.index)
# ---- Split => x,y (medv is price)
#
x_train = data_train.drop('medv', axis=1)
y_train = data_train['medv']
x_test  = data_test.drop('medv', axis=1)
y_test  = data_test['medv']
print('Original data shape was : ',data.shape)
print('x_train : ',x_train.shape, 'y_train : ',y_train.shape)
print('x_test : ',x_test.shape, 'y_test : ',y_test.shape)
```
%% Output
Original data shape was : (506, 14)
x_train : (354, 13) y_train : (354,)
x_test : (152, 13) y_test : (152,)
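%% Cell type:markdown id: tags:
Note: `data.sample(frac=0.7)` draws a different random split at each run, so the shapes above stay the same but the selected rows change from one execution to the next. For a reproducible split, the sampling seed can be fixed, as in this minimal sketch (the seed value 42 is an arbitrary choice):
``` python
data_train = data.sample(frac=0.7, axis=0, random_state=42)   # fixed seed => same split every run
data_test  = data.drop(data_train.index)
```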
%% Cell type:markdown id: tags:
### 3.2 - Data normalization
**Note:**
- All input data must be normalized, train and test.
- To do this we will subtract the mean and divide by the standard deviation.
- But test data should not be used in any way, even for normalization.
- The mean and the standard deviation will therefore only be calculated with the train data.
%% Cell type:code id: tags:
``` python
display(x_train.describe().style.format("{0:.2f}").set_caption("Before normalization :"))
mean = x_train.mean()
std  = x_train.std()
x_train = (x_train - mean) / std
x_test  = (x_test  - mean) / std
display(x_train.describe().style.format("{0:.2f}").set_caption("After normalization :"))
x_train, y_train = np.array(x_train), np.array(y_train)
x_test,  y_test  = np.array(x_test),  np.array(y_test)
```
%% Output
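%% Cell type:markdown id: tags:
A quick sanity check (a minimal sketch): after scaling with the train statistics, the train set is centered and reduced by construction, while the test set is only approximately so.
%% Cell type:code id: tags:
``` python
# Train data: mean ~ 0 and std ~ 1 by construction;
# test data: only approximately, since it was scaled with the *train* statistics
print('x_train mean / std : {:.3f} / {:.3f}'.format(x_train.mean(), x_train.std()))
print('x_test  mean / std : {:.3f} / {:.3f}'.format(x_test.mean(),  x_test.std()))
```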
%% Cell type:markdown id: tags:
## Step 4 - Build a model
More information about:
- [Optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers)
- [Activation](https://www.tensorflow.org/api_docs/python/tf/keras/activations)
- [Loss](https://www.tensorflow.org/api_docs/python/tf/keras/losses)
- [Metrics](https://www.tensorflow.org/api_docs/python/tf/keras/metrics)
%% Cell type:code id: tags:
``` python
def get_model_v1(shape):
    model = keras.models.Sequential()
    model.add(keras.layers.Input(shape, name="InputLayer"))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n1'))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n2'))
    model.add(keras.layers.Dense(1, name='Output'))
    model.compile(optimizer = 'rmsprop',
                  loss      = 'mse',
                  metrics   = ['mae', 'mse'] )
    return model
```
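%% Cell type:markdown id: tags:
The string identifiers `'rmsprop'`, `'mse'` and `'mae'` select the Keras defaults. If you want to tune the optimizer, an equivalent variant with an explicit optimizer object could look like this (a sketch: `get_model_v2` is a hypothetical name, and 0.001 is simply RMSprop's default learning rate spelled out):
%% Cell type:code id: tags:
``` python
def get_model_v2(shape):
    # Same architecture as get_model_v1, but with an explicit optimizer object
    model = keras.models.Sequential()
    model.add(keras.layers.Input(shape, name="InputLayer"))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n1'))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n2'))
    model.add(keras.layers.Dense(1, name='Output'))
    model.compile(optimizer = keras.optimizers.RMSprop(learning_rate=0.001),
                  loss      = 'mse',
                  metrics   = ['mae', 'mse'])
    return model
```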
%% Cell type:markdown id: tags:
## Step 5 - Train the model
### 5.1 - Get it
%% Cell type:code id: tags:
``` python
model=get_model_v1( (13,) )
model.summary()
img=keras.utils.plot_model( model, to_file='./run/model.png', show_shapes=True, show_layer_names=True, dpi=96)
display(img)
```
%% Output
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
Dense_n1 (Dense)             (None, 64)                896
_________________________________________________________________
Dense_n2 (Dense)             (None, 64)                4160
_________________________________________________________________
Output (Dense)               (None, 1)                 65
=================================================================
Total params: 5,121
Trainable params: 5,121
Non-trainable params: 0
_________________________________________________________________
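%% Cell type:markdown id: tags:
The parameter counts follow from (inputs × units) + biases: Dense_n1 has 13×64 + 64 = 896 parameters, Dense_n2 has 64×64 + 64 = 4160, and Output has 64×1 + 1 = 65, i.e. 5,121 in total.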
%% Cell type:markdown id: tags:
### 5.2 - Add callback
%% Cell type:code id: tags:
``` python
os.makedirs('./run/models', mode=0o750, exist_ok=True)
save_dir = "./run/models/best_model.h5"
savemodel_callback = tf.keras.callbacks.ModelCheckpoint(filepath=save_dir, verbose=0, save_best_only=True)
```
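%% Cell type:markdown id: tags:
With `save_best_only=True`, the checkpoint keeps the weights from the epoch with the best monitored quantity, which is the validation loss by default. The same callback with those defaults spelled out (an equivalent sketch):
%% Cell type:code id: tags:
``` python
# Equivalent callback with the monitored quantity made explicit
# (val_loss / 'min' are the effective defaults here)
savemodel_callback = tf.keras.callbacks.ModelCheckpoint(filepath=save_dir,
                                                        monitor='val_loss',
                                                        mode='min',
                                                        verbose=0,
                                                        save_best_only=True)
```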
%% Cell type:markdown id: tags:
### 5.3 - Train it
%% Cell type:code id: tags:
``` python
history = model.fit(x_train,
                    y_train,
                    epochs          = 100,
                    batch_size      = 10,
                    verbose         = 1,
                    validation_data = (x_test, y_test),
                    callbacks       = [savemodel_callback])
```
%% Output
Train on 354 samples, validate on 152 samples
Epoch 1/100
354/354 [==============================] - 1s 3ms/sample - loss: 452.0200 - mae: 19.1483 - mse: 452.0200 - val_loss: 295.4043 - val_mae: 15.2476 - val_mse: 295.4043
Epoch 2/100
354/354 [==============================] - 0s 318us/sample - loss: 203.8159 - mae: 11.9110 - mse: 203.8159 - val_loss: 102.2654 - val_mae: 8.2041 - val_mse: 102.2654
Epoch 3/100
354/354 [==============================] - 0s 322us/sample - loss: 82.2725 - mae: 6.6720 - mse: 82.2725 - val_loss: 48.9591 - val_mae: 5.3325 - val_mse: 48.9591
Epoch 4/100
354/354 [==============================] - 0s 330us/sample - loss: 47.3532 - mae: 4.9323 - mse: 47.3532 - val_loss: 30.3381 - val_mae: 4.0806 - val_mse: 30.3381
Epoch 5/100
354/354 [==============================] - 0s 332us/sample - loss: 34.2380 - mae: 4.1989 - mse: 34.2380 - val_loss: 25.3102 - val_mae: 3.6149 - val_mse: 25.3102
Epoch 6/100
354/354 [==============================] - 0s 330us/sample - loss: 27.7203 - mae: 3.7333 - mse: 27.7203 - val_loss: 24.1136 - val_mae: 3.4089 - val_mse: 24.1136
Epoch 7/100
354/354 [==============================] - 0s 336us/sample - loss: 23.4702 - mae: 3.4503 - mse: 23.4702 - val_loss: 21.9095 - val_mae: 3.1906 - val_mse: 21.9095
Epoch 8/100
354/354 [==============================] - 0s 322us/sample - loss: 19.8215 - mae: 3.1687 - mse: 19.8215 - val_loss: 21.9063 - val_mae: 3.2564 - val_mse: 21.9063
Epoch 9/100
354/354 [==============================] - 0s 326us/sample - loss: 17.6146 - mae: 2.9640 - mse: 17.6146 - val_loss: 19.1573 - val_mae: 2.9280 - val_mse: 19.1573
Epoch 10/100
354/354 [==============================] - 0s 288us/sample - loss: 15.9631 - mae: 2.8267 - mse: 15.9631 - val_loss: 19.1600 - val_mae: 2.8806 - val_mse: 19.1600
Epoch 11/100
354/354 [==============================] - 0s 326us/sample - loss: 14.4344 - mae: 2.6588 - mse: 14.4344 - val_loss: 18.0972 - val_mae: 2.7704 - val_mse: 18.0972
Epoch 12/100
354/354 [==============================] - 0s 330us/sample - loss: 13.3890 - mae: 2.5821 - mse: 13.3890 - val_loss: 18.0529 - val_mae: 2.7683 - val_mse: 18.0529
Epoch 13/100
354/354 [==============================] - 0s 326us/sample - loss: 12.7002 - mae: 2.5117 - mse: 12.7002 - val_loss: 17.7848 - val_mae: 2.6781 - val_mse: 17.7848
Epoch 14/100
354/354 [==============================] - 0s 279us/sample - loss: 11.8030 - mae: 2.4625 - mse: 11.8030 - val_loss: 18.4840 - val_mae: 2.7626 - val_mse: 18.4840
Epoch 15/100
354/354 [==============================] - 0s 331us/sample - loss: 11.4627 - mae: 2.3904 - mse: 11.4627 - val_loss: 17.1289 - val_mae: 2.6199 - val_mse: 17.1289
Epoch 16/100
354/354 [==============================] - 0s 284us/sample - loss: 11.1781 - mae: 2.3387 - mse: 11.1781 - val_loss: 17.9369 - val_mae: 2.6804 - val_mse: 17.9369
Epoch 17/100
354/354 [==============================] - 0s 326us/sample - loss: 10.7485 - mae: 2.3250 - mse: 10.7485 - val_loss: 16.6649 - val_mae: 2.5390 - val_mse: 16.6649
Epoch 18/100
354/354 [==============================] - 0s 294us/sample - loss: 10.5149 - mae: 2.2548 - mse: 10.5149 - val_loss: 18.1112 - val_mae: 2.6858 - val_mse: 18.1112
Epoch 19/100
354/354 [==============================] - 0s 295us/sample - loss: 10.4495 - mae: 2.2872 - mse: 10.4495 - val_loss: 17.9377 - val_mae: 2.6937 - val_mse: 17.9377
Epoch 20/100
354/354 [==============================] - 0s 277us/sample - loss: 10.2075 - mae: 2.2586 - mse: 10.2075 - val_loss: 17.5565 - val_mae: 2.5374 - val_mse: 17.5565
Epoch 21/100
354/354 [==============================] - 0s 325us/sample - loss: 10.0869 - mae: 2.2396 - mse: 10.0869 - val_loss: 16.2770 - val_mae: 2.4551 - val_mse: 16.2770
Epoch 22/100
354/354 [==============================] - 0s 278us/sample - loss: 9.5957 - mae: 2.1407 - mse: 9.5957 - val_loss: 17.8160 - val_mae: 2.6874 - val_mse: 17.8160
Epoch 23/100
354/354 [==============================] - 0s 304us/sample - loss: 9.7569 - mae: 2.2057 - mse: 9.7569 - val_loss: 15.5761 - val_mae: 2.4537 - val_mse: 15.5761
Epoch 24/100
354/354 [==============================] - 0s 286us/sample - loss: 9.4873 - mae: 2.1643 - mse: 9.4873 - val_loss: 16.8661 - val_mae: 2.5735 - val_mse: 16.8661
Epoch 25/100
354/354 [==============================] - 0s 287us/sample - loss: 9.0956 - mae: 2.1422 - mse: 9.0956 - val_loss: 16.5815 - val_mae: 2.4812 - val_mse: 16.5815
Epoch 26/100
354/354 [==============================] - 0s 288us/sample - loss: 9.3352 - mae: 2.1471 - mse: 9.3352 - val_loss: 16.0146 - val_mae: 2.4404 - val_mse: 16.0146
Epoch 27/100
354/354 [==============================] - 0s 286us/sample - loss: 8.6794 - mae: 2.0948 - mse: 8.6794 - val_loss: 18.2565 - val_mae: 2.7272 - val_mse: 18.2565
Epoch 28/100
354/354 [==============================] - 0s 294us/sample - loss: 8.9854 - mae: 2.1159 - mse: 8.9854 - val_loss: 16.4515 - val_mae: 2.5282 - val_mse: 16.4515
Epoch 29/100
354/354 [==============================] - 0s 286us/sample - loss: 8.8348 - mae: 2.1032 - mse: 8.8348 - val_loss: 17.2604 - val_mae: 2.5932 - val_mse: 17.2604
Epoch 30/100
354/354 [==============================] - 0s 244us/sample - loss: 8.7365 - mae: 2.0970 - mse: 8.7365 - val_loss: 16.1155 - val_mae: 2.4509 - val_mse: 16.1155
Epoch 31/100
354/354 [==============================] - 0s 282us/sample - loss: 8.6290 - mae: 2.0487 - mse: 8.6290 - val_loss: 16.9125 - val_mae: 2.5010 - val_mse: 16.9125
Epoch 32/100
354/354 [==============================] - 0s 280us/sample - loss: 8.6531 - mae: 2.0411 - mse: 8.6531 - val_loss: 15.7585 - val_mae: 2.4288 - val_mse: 15.7585
Epoch 33/100
354/354 [==============================] - 0s 330us/sample - loss: 8.6551 - mae: 2.0516 - mse: 8.6551 - val_loss: 15.4765 - val_mae: 2.4073 - val_mse: 15.4765
Epoch 34/100
354/354 [==============================] - 0s 292us/sample - loss: 8.4218 - mae: 2.0072 - mse: 8.4218 - val_loss: 16.2900 - val_mae: 2.5081 - val_mse: 16.2900
Epoch 35/100
354/354 [==============================] - 0s 293us/sample - loss: 8.3149 - mae: 1.9851 - mse: 8.3149 - val_loss: 15.7184 - val_mae: 2.4337 - val_mse: 15.7184
Epoch 36/100
354/354 [==============================] - 0s 289us/sample - loss: 8.4496 - mae: 2.0142 - mse: 8.4496 - val_loss: 16.2760 - val_mae: 2.5489 - val_mse: 16.2760
Epoch 37/100
354/354 [==============================] - 0s 334us/sample - loss: 8.0962 - mae: 1.9872 - mse: 8.0962 - val_loss: 15.3895 - val_mae: 2.3543 - val_mse: 15.3895
Epoch 38/100
354/354 [==============================] - 0s 298us/sample - loss: 8.1599 - mae: 1.9882 - mse: 8.1599 - val_loss: 16.0081 - val_mae: 2.3977 - val_mse: 16.0081
Epoch 39/100
354/354 [==============================] - 0s 286us/sample - loss: 7.9958 - mae: 1.9817 - mse: 7.9958 - val_loss: 15.8999 - val_mae: 2.4583 - val_mse: 15.8999
Epoch 40/100
354/354 [==============================] - 0s 282us/sample - loss: 7.8666 - mae: 1.9441 - mse: 7.8666 - val_loss: 16.8131 - val_mae: 2.6931 - val_mse: 16.8131
Epoch 41/100
354/354 [==============================] - 0s 293us/sample - loss: 7.9312 - mae: 1.9216 - mse: 7.9312 - val_loss: 15.4608 - val_mae: 2.3995 - val_mse: 15.4608
Epoch 42/100
354/354 [==============================] - 0s 286us/sample - loss: 7.6752 - mae: 1.9127 - mse: 7.6752 - val_loss: 15.8675 - val_mae: 2.5118 - val_mse: 15.8675
Epoch 43/100
354/354 [==============================] - 0s 332us/sample - loss: 7.7535 - mae: 1.9296 - mse: 7.7535 - val_loss: 15.2040 - val_mae: 2.3731 - val_mse: 15.2040
Epoch 44/100
354/354 [==============================] - 0s 331us/sample - loss: 7.6188 - mae: 1.9150 - mse: 7.6188 - val_loss: 15.0409 - val_mae: 2.3680 - val_mse: 15.0409
Epoch 45/100
354/354 [==============================] - 0s 284us/sample - loss: 7.6286 - mae: 1.8755 - mse: 7.6286 - val_loss: 15.1650 - val_mae: 2.3595 - val_mse: 15.1650
Epoch 46/100
354/354 [==============================] - 0s 278us/sample - loss: 7.7937 - mae: 1.9318 - mse: 7.7937 - val_loss: 15.7196 - val_mae: 2.4218 - val_mse: 15.7196
Epoch 47/100
354/354 [==============================] - 0s 275us/sample - loss: 7.4244 - mae: 1.9022 - mse: 7.4244 - val_loss: 15.5651 - val_mae: 2.4811 - val_mse: 15.5651
Epoch 48/100
354/354 [==============================] - 0s 330us/sample - loss: 7.4042 - mae: 1.9083 - mse: 7.4042 - val_loss: 14.7377 - val_mae: 2.3598 - val_mse: 14.7377
Epoch 49/100
354/354 [==============================] - 0s 270us/sample - loss: 7.3230 - mae: 1.8741 - mse: 7.3230 - val_loss: 15.2313 - val_mae: 2.4210 - val_mse: 15.2313
Epoch 50/100
354/354 [==============================] - 0s 276us/sample - loss: 7.3075 - mae: 1.8614 - mse: 7.3075 - val_loss: 14.7584 - val_mae: 2.3305 - val_mse: 14.7584
Epoch 51/100
354/354 [==============================] - 0s 270us/sample - loss: 7.4376 - mae: 1.8956 - mse: 7.4376 - val_loss: 15.3226 - val_mae: 2.3742 - val_mse: 15.3226
Epoch 52/100
354/354 [==============================] - 0s 287us/sample - loss: 7.1467 - mae: 1.8380 - mse: 7.1467 - val_loss: 15.5150 - val_mae: 2.4291 - val_mse: 15.5150
Epoch 53/100
354/354 [==============================] - 0s 271us/sample - loss: 6.9376 - mae: 1.8018 - mse: 6.9376 - val_loss: 16.2807 - val_mae: 2.5440 - val_mse: 16.2807
Epoch 54/100
354/354 [==============================] - 0s 289us/sample - loss: 7.0885 - mae: 1.8170 - mse: 7.0885 - val_loss: 15.2975 - val_mae: 2.4258 - val_mse: 15.2975
Epoch 55/100
354/354 [==============================] - 0s 284us/sample - loss: 7.0596 - mae: 1.8107 - mse: 7.0596 - val_loss: 15.7460 - val_mae: 2.4825 - val_mse: 15.7460
Epoch 56/100
354/354 [==============================] - 0s 288us/sample - loss: 6.7812 - mae: 1.8335 - mse: 6.7812 - val_loss: 14.7849 - val_mae: 2.3684 - val_mse: 14.7849
Epoch 57/100
354/354 [==============================] - 0s 290us/sample - loss: 6.9172 - mae: 1.8140 - mse: 6.9172 - val_loss: 15.1139 - val_mae: 2.4500 - val_mse: 15.1139
Epoch 58/100
354/354 [==============================] - 0s 284us/sample - loss: 6.8010 - mae: 1.7637 - mse: 6.8010 - val_loss: 16.7211 - val_mae: 2.5769 - val_mse: 16.7211
Epoch 59/100
354/354 [==============================] - 0s 271us/sample - loss: 6.8954 - mae: 1.8046 - mse: 6.8954 - val_loss: 15.1101 - val_mae: 2.4743 - val_mse: 15.1101
Epoch 60/100
354/354 [==============================] - 0s 280us/sample - loss: 6.7740 - mae: 1.7866 - mse: 6.7740 - val_loss: 15.0811 - val_mae: 2.3838 - val_mse: 15.0811
Epoch 61/100
354/354 [==============================] - 0s 323us/sample - loss: 6.8996 - mae: 1.7872 - mse: 6.8996 - val_loss: 14.4203 - val_mae: 2.3199 - val_mse: 14.4203
Epoch 62/100
354/354 [==============================] - 0s 277us/sample - loss: 6.7188 - mae: 1.7858 - mse: 6.7188 - val_loss: 14.5972 - val_mae: 2.3610 - val_mse: 14.5972
Epoch 63/100
354/354 [==============================] - 0s 266us/sample - loss: 6.5708 - mae: 1.7710 - mse: 6.5708 - val_loss: 14.5145 - val_mae: 2.3563 - val_mse: 14.5145
Epoch 64/100
354/354 [==============================] - 0s 273us/sample - loss: 6.1133 - mae: 1.6706 - mse: 6.1133 - val_loss: 14.9870 - val_mae: 2.4017 - val_mse: 14.9870
Epoch 65/100
354/354 [==============================] - 0s 313us/sample - loss: 6.4980 - mae: 1.7295 - mse: 6.4980 - val_loss: 14.0636 - val_mae: 2.3661 - val_mse: 14.0636
Epoch 66/100
354/354 [==============================] - 0s 262us/sample - loss: 6.5237 - mae: 1.7277 - mse: 6.5237 - val_loss: 14.2366 - val_mae: 2.3318 - val_mse: 14.2366
Epoch 67/100
354/354 [==============================] - 0s 326us/sample - loss: 6.3067 - mae: 1.7433 - mse: 6.3067 - val_loss: 14.0032 - val_mae: 2.3350 - val_mse: 14.0032
Epoch 68/100
354/354 [==============================] - 0s 286us/sample - loss: 6.4447 - mae: 1.7336 - mse: 6.4447 - val_loss: 14.4271 - val_mae: 2.3149 - val_mse: 14.4271
Epoch 69/100
354/354 [==============================] - 0s 332us/sample - loss: 6.3821 - mae: 1.7012 - mse: 6.3821 - val_loss: 13.9716 - val_mae: 2.3141 - val_mse: 13.9716
Epoch 70/100
354/354 [==============================] - 0s 251us/sample - loss: 6.3734 - mae: 1.7080 - mse: 6.3734 - val_loss: 14.9184 - val_mae: 2.4716 - val_mse: 14.9184
Epoch 71/100
354/354 [==============================] - 0s 321us/sample - loss: 6.4273 - mae: 1.7281 - mse: 6.4273 - val_loss: 13.8686 - val_mae: 2.3176 - val_mse: 13.8686
Epoch 72/100
354/354 [==============================] - 0s 285us/sample - loss: 6.2473 - mae: 1.6967 - mse: 6.2473 - val_loss: 14.2249 - val_mae: 2.3450 - val_mse: 14.2249
Epoch 73/100
354/354 [==============================] - 0s 286us/sample - loss: 6.3427 - mae: 1.7034 - mse: 6.3427 - val_loss: 14.3159 - val_mae: 2.3431 - val_mse: 14.3159
Epoch 74/100
354/354 [==============================] - 0s 287us/sample - loss: 6.0929 - mae: 1.6752 - mse: 6.0929 - val_loss: 14.2151 - val_mae: 2.3644 - val_mse: 14.2151
Epoch 75/100
354/354 [==============================] - 0s 289us/sample - loss: 6.1445 - mae: 1.6985 - mse: 6.1445 - val_loss: 14.8251 - val_mae: 2.4202 - val_mse: 14.8251
Epoch 76/100
354/354 [==============================] - 0s 311us/sample - loss: 6.2184 - mae: 1.6867 - mse: 6.2184 - val_loss: 14.0596 - val_mae: 2.3274 - val_mse: 14.0596
Epoch 77/100
354/354 [==============================] - 0s 340us/sample - loss: 6.1201 - mae: 1.6785 - mse: 6.1201 - val_loss: 13.4886 - val_mae: 2.2769 - val_mse: 13.4886
Epoch 78/100
354/354 [==============================] - 0s 286us/sample - loss: 5.9001 - mae: 1.6716 - mse: 5.9001 - val_loss: 14.0295 - val_mae: 2.3214 - val_mse: 14.0295
Epoch 79/100
354/354 [==============================] - 0s 284us/sample - loss: 6.0389 - mae: 1.6783 - mse: 6.0389 - val_loss: 14.0250 - val_mae: 2.3245 - val_mse: 14.0250
Epoch 80/100
354/354 [==============================] - 0s 288us/sample - loss: 5.8268 - mae: 1.6458 - mse: 5.8268 - val_loss: 15.2746 - val_mae: 2.4834 - val_mse: 15.2746
Epoch 81/100
354/354 [==============================] - 0s 284us/sample - loss: 5.8671 - mae: 1.6680 - mse: 5.8671 - val_loss: 14.4935 - val_mae: 2.4353 - val_mse: 14.4935
Epoch 82/100
354/354 [==============================] - 0s 274us/sample - loss: 5.8115 - mae: 1.6742 - mse: 5.8115 - val_loss: 14.6922 - val_mae: 2.3631 - val_mse: 14.6922
Epoch 83/100
354/354 [==============================] - 0s 331us/sample - loss: 5.8561 - mae: 1.6727 - mse: 5.8561 - val_loss: 13.4236 - val_mae: 2.2982 - val_mse: 13.4236
Epoch 84/100
354/354 [==============================] - 0s 290us/sample - loss: 5.7500 - mae: 1.5833 - mse: 5.7500 - val_loss: 14.4867 - val_mae: 2.4330 - val_mse: 14.4867
Epoch 85/100
354/354 [==============================] - 0s 286us/sample - loss: 5.6700 - mae: 1.6435 - mse: 5.6700 - val_loss: 13.9873 - val_mae: 2.3614 - val_mse: 13.9873
Epoch 86/100
354/354 [==============================] - 0s 268us/sample - loss: 5.6816 - mae: 1.6524 - mse: 5.6816 - val_loss: 13.4864 - val_mae: 2.3505 - val_mse: 13.4864
Epoch 87/100
354/354 [==============================] - 0s 267us/sample - loss: 5.5838 - mae: 1.6220 - mse: 5.5838 - val_loss: 15.4727 - val_mae: 2.5215 - val_mse: 15.4727
Epoch 88/100
354/354 [==============================] - 0s 284us/sample - loss: 5.6117 - mae: 1.6208 - mse: 5.6117 - val_loss: 13.6392 - val_mae: 2.3150 - val_mse: 13.6392
Epoch 89/100
354/354 [==============================] - 0s 324us/sample - loss: 5.5648 - mae: 1.6051 - mse: 5.5648 - val_loss: 13.2082 - val_mae: 2.2858 - val_mse: 13.2082
Epoch 90/100
354/354 [==============================] - 0s 288us/sample - loss: 5.6019 - mae: 1.5946 - mse: 5.6019 - val_loss: 13.7882 - val_mae: 2.3677 - val_mse: 13.7882
Epoch 91/100
354/354 [==============================] - 0s 336us/sample - loss: 5.4979 - mae: 1.6000 - mse: 5.4979 - val_loss: 12.9619 - val_mae: 2.2898 - val_mse: 12.9619
Epoch 92/100
354/354 [==============================] - 0s 294us/sample - loss: 5.4595 - mae: 1.5815 - mse: 5.4595 - val_loss: 13.8617 - val_mae: 2.3735 - val_mse: 13.8617
Epoch 93/100
354/354 [==============================] - 0s 321us/sample - loss: 5.1999 - mae: 1.6043 - mse: 5.1999 - val_loss: 12.9011 - val_mae: 2.3053 - val_mse: 12.9011
Epoch 94/100
354/354 [==============================] - 0s 268us/sample - loss: 5.2630 - mae: 1.5463 - mse: 5.2630 - val_loss: 14.3610 - val_mae: 2.4348 - val_mse: 14.3610
Epoch 95/100
354/354 [==============================] - 0s 300us/sample - loss: 5.3272 - mae: 1.6047 - mse: 5.3272 - val_loss: 12.9650 - val_mae: 2.2989 - val_mse: 12.9650
Epoch 96/100
354/354 [==============================] - 0s 283us/sample - loss: 5.3137 - mae: 1.5796 - mse: 5.3137 - val_loss: 13.9091 - val_mae: 2.3669 - val_mse: 13.9091
Epoch 97/100
354/354 [==============================] - 0s 280us/sample - loss: 5.2891 - mae: 1.5773 - mse: 5.2891 - val_loss: 13.2578 - val_mae: 2.3012 - val_mse: 13.2578
Epoch 98/100
354/354 [==============================] - 0s 264us/sample - loss: 5.3977 - mae: 1.5920 - mse: 5.3977 - val_loss: 13.8690 - val_mae: 2.4075 - val_mse: 13.8690
Epoch 99/100
354/354 [==============================] - 0s 273us/sample - loss: 5.3071 - mae: 1.5391 - mse: 5.3071 - val_loss: 12.9043 - val_mae: 2.2816 - val_mse: 12.9043
Epoch 100/100
354/354 [==============================] - 0s 318us/sample - loss: 5.2748 - mae: 1.5458 - mse: 5.2748 - val_loss: 12.6915 - val_mae: 2.2683 - val_mse: 12.6915
%% Cell type:markdown id: tags:
## Step 6 - Evaluate
### 6.1 - Model evaluation
MAE = Mean Absolute Error (between the labels and predictions)
An MAE of 3 represents an average prediction error of $3k (prices are expressed in thousands of dollars).
%% Cell type:code id: tags:
``` python
score = model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Output
x_test / loss : 12.6915
x_test / mae : 2.2683
x_test / mse : 12.6915
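%% Cell type:markdown id: tags:
As a cross-check (a minimal sketch), the MAE reported by `evaluate()` is just the mean absolute gap between the labels and the predictions:
%% Cell type:code id: tags:
``` python
# MAE by hand: mean of |y_true - y_pred| over the test set
y_pred = model.predict(x_test).flatten()
print('Manual MAE : {:5.4f}'.format(np.mean(np.abs(y_test - y_pred))))
```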
%% Cell type:markdown id: tags:
### 6.2 - Training history
What was the best result during our training?
%% Cell type:code id: tags:
``` python
print("min( val_mae ) : {:.4f}".format( min(history.history["val_mae"]) ) )
```
%% Output
min( val_mae ) : 2.2683
%% Cell type:code id: tags:
``` python
ooo.plot_history(history, plot={'MSE' :['mse', 'val_mse'],
                                'MAE' :['mae', 'val_mae'],
                                'LOSS':['loss','val_loss']})
```
%% Output
%% Cell type:markdown id: tags:
## Step 7 - Restore a model
%% Cell type:markdown id: tags:
### 7.1 - Reload model
%% Cell type:code id: tags:
``` python
loaded_model = tf.keras.models.load_model('./run/models/best_model.h5')
loaded_model.summary()
print("Loaded.")
```
%% Output
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
Dense_n1 (Dense)             (None, 64)                896
_________________________________________________________________
Dense_n2 (Dense)             (None, 64)                4160
_________________________________________________________________
Output (Dense)               (None, 1)                 65
=================================================================
Total params: 5,121
Trainable params: 5,121
Non-trainable params: 0
_________________________________________________________________
Loaded.
%% Cell type:markdown id: tags:
### 7.2 - Evaluate it
%% Cell type:code id: tags:
``` python
score = loaded_model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Output
x_test / loss : 12.6915
x_test / mae : 2.2683
x_test / mse : 12.6915
%% Cell type:markdown id: tags:
### 7.3 - Make a prediction
%% Cell type:code id: tags:
``` python
my_data = [ 1.26425925, -0.48522739,  1.0436489 , -0.23112788,  1.37120745,
           -2.14308942,  1.13489104, -1.06802005,  1.71189006,  1.57042287,
            0.77859951,  0.14769795,  2.7585581 ]
real_price = 10.4
my_data=np.array(my_data).reshape(1,13)
```
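%% Cell type:markdown id: tags:
Note that the values in `my_data` are already normalized with the same mean/std scaling as the training data; a raw sample would first have to be scaled the same way. To see the sample in its original units, we can invert the scaling (a sketch, assuming the `mean` and `std` Series from step 3.2 are still in scope):
%% Cell type:code id: tags:
``` python
# Invert the normalization: back to the original feature units
raw_sample = my_data[0] * std + mean
display(pd.DataFrame({'raw value': raw_sample}).style.format("{0:.2f}"))
```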
%% Cell type:code id: tags:
``` python
predictions = loaded_model.predict( my_data )
print("Prediction : {:.2f} K$ Reality : {:.2f} K$".format(predictions[0][0], real_price))
```
%% Output
Prediction : 10.75 K$ Reality : 10.40 K$
%% Cell type:markdown id: tags:
---
<img width="80px" src="../fidle/img/00-Fidle-logo-01.svg"></img>