%% Cell type:markdown id: tags:
![header1](../fidle/img/00-Fidle-header-01.png)
# Deep Neural Network (DNN) - BHPD dataset
<!-- INDEX : Simple regression with a Dense Neural Network (DNN) - BHPD dataset -->
A very simple and classic example of **regression**:
## Objectives :
- Predict **housing prices** from a set of house features.
- Understand the principle and the architecture of a regression with a dense neural network.
The **[Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html)** consists of house prices in various places in Boston.
Alongside price, the dataset also provides information such as the crime rate, the proportion of non-retail business acres per town,
the age of the homes and many other attributes...
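For reference, here is a quick way to peek at the 14 columns (13 features plus the `medv` price target). A minimal sketch, assuming the `./data/BostonHousing.csv` file used below in step 2.2:
``` python
import pandas as pd

# Read only the header row to list the columns (13 features + 'medv' target)
cols = pd.read_csv('./data/BostonHousing.csv', header=0, nrows=0).columns
print(list(cols))
```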
What we're going to do:
- Retrieve data
- Prepare the data
- Build a model
- Train the model
- Evaluate the result
%% Cell type:markdown id: tags:
## Step 1 - Import and init
%% Cell type:code id: tags:
``` python
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os,sys
from IPython.display import display, Markdown
from importlib import reload
sys.path.append('..')
import fidle.pwk as ooo
ooo.init()
os.makedirs('./run/models', mode=0o750, exist_ok=True)
```
%% Output
FIDLE 2020 - Practical Work Module
Version : 0.2.9
Run time : Tuesday 18 February 2020, 14:42:02
TensorFlow version : 2.0.0
Keras version : 2.2.4-tf
%% Cell type:markdown id: tags:
## Step 2 - Retrieve data
### 2.1 - Option 1 : From Keras
Boston housing is a famous historic dataset, so we can get it directly from [Keras datasets](https://www.tensorflow.org/api_docs/python/tf/keras/datasets)
%% Cell type:raw id: tags:
(x_train, y_train), (x_test, y_test) = keras.datasets.boston_housing.load_data(test_split=0.2, seed=113)
%% Cell type:markdown id: tags:
### 2.2 - Option 2 : From a CSV file
More fun!
%% Cell type:code id: tags:
``` python
data = pd.read_csv('./data/BostonHousing.csv', header=0)
display(data.head(5).style.format("{0:.2f}"))
print('Missing data : ',data.isna().sum().sum(), ' Shape is : ', data.shape)
```
%% Output
Missing data :  0  Shape is :  (506, 14)
%% Cell type:markdown id: tags:
## Step 3 - Preparing the data
### 3.1 - Split data
We will use 70% of the data for training and 30% for validation.
x will be the input data and y the expected output.
%% Cell type:code id: tags:
``` python
# ---- Split => train, test
#
data_train = data.sample(frac=0.7, axis=0)
data_test = data.drop(data_train.index)
# ---- Split => x,y (medv is price)
#
x_train = data_train.drop('medv', axis=1)
y_train = data_train['medv']
x_test = data_test.drop('medv', axis=1)
y_test = data_test['medv']
print('Original data shape was : ',data.shape)
print('x_train : ',x_train.shape, 'y_train : ',y_train.shape)
print('x_test : ',x_test.shape, 'y_test : ',y_test.shape)
```
%% Output
Original data shape was : (506, 14)
x_train : (354, 13) y_train : (354,)
x_test : (152, 13) y_test : (152,)
%% Cell type:markdown id: tags:
### 3.2 - Data normalization
**Note :**
- All input data must be normalized, both train and test.
- To do this we will **subtract the mean** and **divide by the standard deviation**.
- But the test data must not be used in any way, even for normalization.
- The mean and the standard deviation will therefore only be calculated from the training data.
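In other words, with the mean $\mu$ and standard deviation $\sigma$ computed on the training set only:

$$x' = \frac{x - \mu_{train}}{\sigma_{train}}$$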
%% Cell type:code id: tags:
``` python
display(x_train.describe().style.format("{0:.2f}").set_caption("Before normalization :"))
mean = x_train.mean()
std = x_train.std()
x_train = (x_train - mean) / std
x_test = (x_test - mean) / std
display(x_train.describe().style.format("{0:.2f}").set_caption("After normalization :"))
x_train, y_train = np.array(x_train), np.array(y_train)
x_test, y_test = np.array(x_test), np.array(y_test)
```
%% Output
%% Cell type:markdown id: tags:
## Step 4 - Build a model
More information about:
- [Optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers)
- [Activation](https://www.tensorflow.org/api_docs/python/tf/keras/activations)
- [Loss](https://www.tensorflow.org/api_docs/python/tf/keras/losses)
- [Metrics](https://www.tensorflow.org/api_docs/python/tf/keras/metrics)
%% Cell type:code id: tags:
``` python
def get_model_v1(shape):
    model = keras.models.Sequential()
    model.add(keras.layers.Input(shape, name="InputLayer"))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n1'))
    model.add(keras.layers.Dense(64, activation='relu', name='Dense_n2'))
    model.add(keras.layers.Dense(1, name='Output'))
    model.compile(optimizer = 'rmsprop',
                  loss      = 'mse',
                  metrics   = ['mae', 'mse'])
    return model
```
%% Cell type:markdown id: tags:
## Step 5 - Train the model
### 5.1 - Get it
%% Cell type:code id: tags:
``` python
model=get_model_v1( (13,) )
model.summary()
keras.utils.plot_model( model, to_file='./run/model.png', show_shapes=True, show_layer_names=True, dpi=96)
```
%% Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Dense_n1 (Dense) (None, 64) 896
_________________________________________________________________
Dense_n2 (Dense) (None, 64) 4160
_________________________________________________________________
Output (Dense) (None, 1) 65
=================================================================
Total params: 5,121
Trainable params: 5,121
Non-trainable params: 0
_________________________________________________________________
<IPython.core.display.Image object>
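%% Cell type:markdown id: tags:
The parameter counts can be checked by hand: a Dense layer has (inputs × units) weights plus one bias per unit, so 13×64+64 = 896, 64×64+64 = 4,160 and 64×1+1 = 65, for 5,121 parameters in total.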
%% Cell type:markdown id: tags:
### 5.2 - Train it
%% Cell type:code id: tags:
``` python
history = model.fit(x_train,
                    y_train,
                    epochs          = 100,
                    batch_size      = 10,
                    verbose         = 1,
                    validation_data = (x_test, y_test))
```
%% Output
Train on 354 samples, validate on 152 samples
Epoch 1/100
354/354 [==============================] - 1s 2ms/sample - loss: 414.5603 - mae: 18.2577 - mse: 414.5602 - val_loss: 266.3728 - val_mae: 13.9913 - val_mse: 266.3728
Epoch 2/100
354/354 [==============================] - 0s 190us/sample - loss: 165.4507 - mae: 10.4618 - mse: 165.4507 - val_loss: 74.4125 - val_mae: 6.0372 - val_mse: 74.4125
Epoch 3/100
354/354 [==============================] - 0s 187us/sample - loss: 54.2313 - mae: 5.3763 - mse: 54.2313 - val_loss: 47.0203 - val_mae: 4.7399 - val_mse: 47.0203
Epoch 4/100
354/354 [==============================] - 0s 166us/sample - loss: 32.3303 - mae: 4.2632 - mse: 32.3303 - val_loss: 38.0120 - val_mae: 4.2484 - val_mse: 38.0120
Epoch 5/100
354/354 [==============================] - 0s 153us/sample - loss: 25.3763 - mae: 3.7745 - mse: 25.3763 - val_loss: 32.4707 - val_mae: 3.8465 - val_mse: 32.4707
Epoch 6/100
354/354 [==============================] - 0s 153us/sample - loss: 22.2331 - mae: 3.4720 - mse: 22.2331 - val_loss: 29.6142 - val_mae: 3.4844 - val_mse: 29.6142
Epoch 7/100
354/354 [==============================] - 0s 154us/sample - loss: 19.7834 - mae: 3.2245 - mse: 19.7834 - val_loss: 27.1649 - val_mae: 3.5465 - val_mse: 27.1649
Epoch 8/100
354/354 [==============================] - 0s 155us/sample - loss: 18.0991 - mae: 3.0669 - mse: 18.0991 - val_loss: 26.0093 - val_mae: 3.5617 - val_mse: 26.0093
Epoch 9/100
354/354 [==============================] - 0s 161us/sample - loss: 16.9247 - mae: 2.9184 - mse: 16.9247 - val_loss: 23.2549 - val_mae: 3.3243 - val_mse: 23.2549
Epoch 10/100
354/354 [==============================] - 0s 150us/sample - loss: 16.0827 - mae: 2.8116 - mse: 16.0827 - val_loss: 21.1365 - val_mae: 3.0248 - val_mse: 21.1365
Epoch 11/100
354/354 [==============================] - 0s 170us/sample - loss: 15.0334 - mae: 2.7214 - mse: 15.0334 - val_loss: 20.0163 - val_mae: 2.9800 - val_mse: 20.0163
Epoch 12/100
354/354 [==============================] - 0s 180us/sample - loss: 14.4011 - mae: 2.6949 - mse: 14.4011 - val_loss: 19.8958 - val_mae: 2.9262 - val_mse: 19.8958
Epoch 13/100
354/354 [==============================] - 0s 184us/sample - loss: 13.9168 - mae: 2.5674 - mse: 13.9168 - val_loss: 18.5729 - val_mae: 2.7302 - val_mse: 18.5729
Epoch 14/100
354/354 [==============================] - 0s 161us/sample - loss: 13.5575 - mae: 2.5442 - mse: 13.5575 - val_loss: 17.8812 - val_mae: 2.6748 - val_mse: 17.8812
Epoch 15/100
354/354 [==============================] - 0s 166us/sample - loss: 12.8689 - mae: 2.4779 - mse: 12.8689 - val_loss: 18.9649 - val_mae: 2.7560 - val_mse: 18.9649
Epoch 16/100
354/354 [==============================] - 0s 159us/sample - loss: 12.6470 - mae: 2.4670 - mse: 12.6470 - val_loss: 16.5834 - val_mae: 2.6016 - val_mse: 16.5834
Epoch 17/100
354/354 [==============================] - 0s 159us/sample - loss: 12.3566 - mae: 2.4280 - mse: 12.3566 - val_loss: 16.7371 - val_mae: 2.6670 - val_mse: 16.7371
Epoch 18/100
354/354 [==============================] - 0s 158us/sample - loss: 12.3328 - mae: 2.4060 - mse: 12.3328 - val_loss: 16.3754 - val_mae: 2.6027 - val_mse: 16.3754
Epoch 19/100
354/354 [==============================] - 0s 152us/sample - loss: 11.8357 - mae: 2.3106 - mse: 11.8357 - val_loss: 16.1015 - val_mae: 2.6255 - val_mse: 16.1015
Epoch 20/100
354/354 [==============================] - 0s 163us/sample - loss: 11.6722 - mae: 2.3482 - mse: 11.6722 - val_loss: 16.1405 - val_mae: 2.6889 - val_mse: 16.1405
Epoch 21/100
354/354 [==============================] - 0s 175us/sample - loss: 11.2774 - mae: 2.3344 - mse: 11.2774 - val_loss: 15.2110 - val_mae: 2.5038 - val_mse: 15.2110
Epoch 22/100
354/354 [==============================] - 0s 180us/sample - loss: 11.2491 - mae: 2.3055 - mse: 11.2491 - val_loss: 15.4745 - val_mae: 2.4494 - val_mse: 15.4744
Epoch 23/100
354/354 [==============================] - 0s 187us/sample - loss: 10.9102 - mae: 2.2171 - mse: 10.9102 - val_loss: 15.1145 - val_mae: 2.4282 - val_mse: 15.1145
Epoch 24/100
354/354 [==============================] - 0s 168us/sample - loss: 10.7952 - mae: 2.2533 - mse: 10.7952 - val_loss: 14.3789 - val_mae: 2.3683 - val_mse: 14.3789
Epoch 25/100
354/354 [==============================] - 0s 171us/sample - loss: 10.7250 - mae: 2.2489 - mse: 10.7250 - val_loss: 15.1102 - val_mae: 2.3422 - val_mse: 15.1102
Epoch 26/100
354/354 [==============================] - 0s 158us/sample - loss: 10.4010 - mae: 2.1702 - mse: 10.4010 - val_loss: 14.3260 - val_mae: 2.3176 - val_mse: 14.3260
Epoch 27/100
354/354 [==============================] - 0s 149us/sample - loss: 10.1442 - mae: 2.1797 - mse: 10.1442 - val_loss: 13.6694 - val_mae: 2.3864 - val_mse: 13.6694
Epoch 28/100
354/354 [==============================] - 0s 168us/sample - loss: 10.1391 - mae: 2.1809 - mse: 10.1391 - val_loss: 14.0177 - val_mae: 2.3467 - val_mse: 14.0177
Epoch 29/100
354/354 [==============================] - 0s 149us/sample - loss: 9.9119 - mae: 2.1267 - mse: 9.9119 - val_loss: 14.0739 - val_mae: 2.4617 - val_mse: 14.0739
Epoch 30/100
354/354 [==============================] - 0s 164us/sample - loss: 10.0176 - mae: 2.1669 - mse: 10.0176 - val_loss: 13.5116 - val_mae: 2.3158 - val_mse: 13.5116
Epoch 31/100
354/354 [==============================] - 0s 189us/sample - loss: 9.8259 - mae: 2.1407 - mse: 9.8259 - val_loss: 13.7364 - val_mae: 2.3531 - val_mse: 13.7364
Epoch 32/100
354/354 [==============================] - 0s 178us/sample - loss: 9.4495 - mae: 2.0922 - mse: 9.4495 - val_loss: 14.1936 - val_mae: 2.3887 - val_mse: 14.1936
Epoch 33/100
354/354 [==============================] - 0s 164us/sample - loss: 9.6721 - mae: 2.0870 - mse: 9.6721 - val_loss: 13.4267 - val_mae: 2.3508 - val_mse: 13.4267
Epoch 34/100
354/354 [==============================] - 0s 167us/sample - loss: 9.1042 - mae: 2.0644 - mse: 9.1042 - val_loss: 13.3821 - val_mae: 2.4709 - val_mse: 13.3821
Epoch 35/100
354/354 [==============================] - 0s 155us/sample - loss: 9.0129 - mae: 2.0482 - mse: 9.0129 - val_loss: 14.2184 - val_mae: 2.2754 - val_mse: 14.2184
Epoch 36/100
354/354 [==============================] - 0s 160us/sample - loss: 9.2470 - mae: 2.0661 - mse: 9.2470 - val_loss: 14.3466 - val_mae: 2.5561 - val_mse: 14.3466
Epoch 37/100
354/354 [==============================] - 0s 169us/sample - loss: 9.1695 - mae: 2.0766 - mse: 9.1695 - val_loss: 13.3818 - val_mae: 2.2373 - val_mse: 13.3818
Epoch 38/100
354/354 [==============================] - 0s 165us/sample - loss: 9.1663 - mae: 2.0617 - mse: 9.1663 - val_loss: 14.7461 - val_mae: 2.5061 - val_mse: 14.7461
Epoch 39/100
354/354 [==============================] - 0s 159us/sample - loss: 8.7273 - mae: 2.0208 - mse: 8.7273 - val_loss: 12.5890 - val_mae: 2.3037 - val_mse: 12.5890
Epoch 40/100
354/354 [==============================] - 0s 166us/sample - loss: 8.9038 - mae: 2.0352 - mse: 8.9038 - val_loss: 12.9754 - val_mae: 2.2079 - val_mse: 12.9754
Epoch 41/100
354/354 [==============================] - 0s 153us/sample - loss: 8.6155 - mae: 2.0267 - mse: 8.6155 - val_loss: 13.9239 - val_mae: 2.3525 - val_mse: 13.9239
Epoch 42/100
354/354 [==============================] - 0s 163us/sample - loss: 8.5479 - mae: 2.0170 - mse: 8.5479 - val_loss: 13.6362 - val_mae: 2.2694 - val_mse: 13.6362
Epoch 43/100
354/354 [==============================] - 0s 165us/sample - loss: 8.7087 - mae: 2.0062 - mse: 8.7087 - val_loss: 13.1138 - val_mae: 2.2386 - val_mse: 13.1138
Epoch 44/100
354/354 [==============================] - 0s 160us/sample - loss: 8.3942 - mae: 1.9622 - mse: 8.3942 - val_loss: 12.3461 - val_mae: 2.2337 - val_mse: 12.3461
Epoch 45/100
354/354 [==============================] - 0s 168us/sample - loss: 8.4101 - mae: 2.0098 - mse: 8.4101 - val_loss: 13.2116 - val_mae: 2.2682 - val_mse: 13.2116
Epoch 46/100
354/354 [==============================] - 0s 156us/sample - loss: 8.3264 - mae: 1.9483 - mse: 8.3264 - val_loss: 12.5519 - val_mae: 2.4063 - val_mse: 12.5519
Epoch 47/100
354/354 [==============================] - 0s 158us/sample - loss: 8.1445 - mae: 1.9549 - mse: 8.1445 - val_loss: 12.1838 - val_mae: 2.2591 - val_mse: 12.1838
Epoch 48/100
354/354 [==============================] - 0s 156us/sample - loss: 8.0389 - mae: 1.9304 - mse: 8.0389 - val_loss: 12.6978 - val_mae: 2.1907 - val_mse: 12.6978
Epoch 49/100
354/354 [==============================] - 0s 164us/sample - loss: 8.0705 - mae: 1.9493 - mse: 8.0705 - val_loss: 12.4833 - val_mae: 2.4720 - val_mse: 12.4833
Epoch 50/100
354/354 [==============================] - 0s 158us/sample - loss: 8.1872 - mae: 1.9630 - mse: 8.1872 - val_loss: 12.0043 - val_mae: 2.2610 - val_mse: 12.0043
Epoch 51/100
354/354 [==============================] - 0s 158us/sample - loss: 8.0357 - mae: 1.8946 - mse: 8.0357 - val_loss: 11.3982 - val_mae: 2.1770 - val_mse: 11.3982
Epoch 52/100
354/354 [==============================] - 0s 162us/sample - loss: 7.6882 - mae: 1.8951 - mse: 7.6882 - val_loss: 13.0714 - val_mae: 2.4109 - val_mse: 13.0714
Epoch 53/100
354/354 [==============================] - 0s 162us/sample - loss: 7.9639 - mae: 1.9103 - mse: 7.9639 - val_loss: 12.4297 - val_mae: 2.2996 - val_mse: 12.4297
Epoch 54/100
354/354 [==============================] - 0s 183us/sample - loss: 7.7929 - mae: 1.8971 - mse: 7.7929 - val_loss: 11.9751 - val_mae: 2.2491 - val_mse: 11.9751
Epoch 55/100
354/354 [==============================] - 0s 185us/sample - loss: 7.4411 - mae: 1.8631 - mse: 7.4411 - val_loss: 11.3761 - val_mae: 2.3416 - val_mse: 11.3761
Epoch 56/100
354/354 [==============================] - 0s 186us/sample - loss: 7.6105 - mae: 1.9111 - mse: 7.6105 - val_loss: 12.4939 - val_mae: 2.4095 - val_mse: 12.4939
Epoch 57/100
354/354 [==============================] - 0s 190us/sample - loss: 7.5013 - mae: 1.9146 - mse: 7.5013 - val_loss: 11.6668 - val_mae: 2.1468 - val_mse: 11.6668
Epoch 58/100
354/354 [==============================] - 0s 195us/sample - loss: 7.4096 - mae: 1.8515 - mse: 7.4096 - val_loss: 13.8000 - val_mae: 2.5222 - val_mse: 13.8000
Epoch 59/100
354/354 [==============================] - 0s 180us/sample - loss: 7.2263 - mae: 1.8241 - mse: 7.2263 - val_loss: 10.8964 - val_mae: 2.2130 - val_mse: 10.8964
Epoch 60/100
354/354 [==============================] - 0s 161us/sample - loss: 7.1773 - mae: 1.8526 - mse: 7.1773 - val_loss: 10.7862 - val_mae: 2.1088 - val_mse: 10.7862
Epoch 61/100
354/354 [==============================] - 0s 165us/sample - loss: 7.0812 - mae: 1.8308 - mse: 7.0812 - val_loss: 10.8147 - val_mae: 2.3209 - val_mse: 10.8147
Epoch 62/100
354/354 [==============================] - 0s 155us/sample - loss: 7.2235 - mae: 1.8367 - mse: 7.2235 - val_loss: 11.0399 - val_mae: 2.2583 - val_mse: 11.0399
Epoch 63/100
354/354 [==============================] - 0s 155us/sample - loss: 7.0341 - mae: 1.8172 - mse: 7.0341 - val_loss: 10.9894 - val_mae: 2.1429 - val_mse: 10.9894
Epoch 64/100
354/354 [==============================] - 0s 157us/sample - loss: 6.8729 - mae: 1.7492 - mse: 6.8729 - val_loss: 10.5465 - val_mae: 2.1532 - val_mse: 10.5465
Epoch 65/100
354/354 [==============================] - 0s 164us/sample - loss: 6.9345 - mae: 1.7837 - mse: 6.9345 - val_loss: 11.5379 - val_mae: 2.1963 - val_mse: 11.5379
Epoch 66/100
354/354 [==============================] - 0s 166us/sample - loss: 6.8218 - mae: 1.7714 - mse: 6.8218 - val_loss: 10.1486 - val_mae: 2.1617 - val_mse: 10.1486
Epoch 67/100
354/354 [==============================] - 0s 157us/sample - loss: 6.8711 - mae: 1.8045 - mse: 6.8711 - val_loss: 10.3196 - val_mae: 2.2297 - val_mse: 10.3196
Epoch 68/100
354/354 [==============================] - 0s 162us/sample - loss: 6.7281 - mae: 1.7762 - mse: 6.7281 - val_loss: 11.2361 - val_mae: 2.2046 - val_mse: 11.2361
Epoch 69/100
354/354 [==============================] - 0s 158us/sample - loss: 6.5518 - mae: 1.7292 - mse: 6.5518 - val_loss: 10.2378 - val_mae: 2.1494 - val_mse: 10.2378
Epoch 70/100
354/354 [==============================] - 0s 161us/sample - loss: 6.6489 - mae: 1.7383 - mse: 6.6489 - val_loss: 11.1613 - val_mae: 2.2212 - val_mse: 11.1613
Epoch 71/100
354/354 [==============================] - 0s 176us/sample - loss: 6.5827 - mae: 1.7564 - mse: 6.5827 - val_loss: 10.0177 - val_mae: 2.2440 - val_mse: 10.0177
Epoch 72/100
354/354 [==============================] - 0s 168us/sample - loss: 6.3411 - mae: 1.7463 - mse: 6.3411 - val_loss: 10.7929 - val_mae: 2.1946 - val_mse: 10.7929
Epoch 73/100
354/354 [==============================] - 0s 163us/sample - loss: 6.3621 - mae: 1.7466 - mse: 6.3621 - val_loss: 9.7344 - val_mae: 2.1441 - val_mse: 9.7344
Epoch 74/100
354/354 [==============================] - 0s 158us/sample - loss: 6.2298 - mae: 1.7411 - mse: 6.2298 - val_loss: 11.2495 - val_mae: 2.1948 - val_mse: 11.2495
Epoch 75/100
354/354 [==============================] - 0s 159us/sample - loss: 6.3037 - mae: 1.7169 - mse: 6.3037 - val_loss: 10.1339 - val_mae: 2.1716 - val_mse: 10.1339
Epoch 76/100
354/354 [==============================] - 0s 158us/sample - loss: 6.0780 - mae: 1.6686 - mse: 6.0780 - val_loss: 11.9975 - val_mae: 2.3317 - val_mse: 11.9975
Epoch 77/100
354/354 [==============================] - 0s 165us/sample - loss: 6.3311 - mae: 1.7082 - mse: 6.3311 - val_loss: 11.6433 - val_mae: 2.2756 - val_mse: 11.6433
Epoch 78/100
354/354 [==============================] - 0s 155us/sample - loss: 6.0620 - mae: 1.6765 - mse: 6.0620 - val_loss: 13.0159 - val_mae: 2.5073 - val_mse: 13.0159
Epoch 79/100
354/354 [==============================] - 0s 167us/sample - loss: 6.1819 - mae: 1.7157 - mse: 6.1819 - val_loss: 10.1000 - val_mae: 2.1462 - val_mse: 10.1000
Epoch 80/100
354/354 [==============================] - 0s 158us/sample - loss: 5.9085 - mae: 1.6720 - mse: 5.9085 - val_loss: 11.7867 - val_mae: 2.5045 - val_mse: 11.7866
Epoch 81/100
354/354 [==============================] - 0s 168us/sample - loss: 6.0201 - mae: 1.6678 - mse: 6.0201 - val_loss: 10.8789 - val_mae: 2.3031 - val_mse: 10.8789
Epoch 82/100
354/354 [==============================] - 0s 159us/sample - loss: 6.1278 - mae: 1.6799 - mse: 6.1278 - val_loss: 9.8114 - val_mae: 2.1048 - val_mse: 9.8114
Epoch 83/100
354/354 [==============================] - 0s 150us/sample - loss: 5.6372 - mae: 1.6280 - mse: 5.6372 - val_loss: 10.0971 - val_mae: 2.1464 - val_mse: 10.0971
Epoch 84/100
354/354 [==============================] - 0s 153us/sample - loss: 5.9587 - mae: 1.6421 - mse: 5.9587 - val_loss: 9.4731 - val_mae: 2.1915 - val_mse: 9.4731
Epoch 85/100
354/354 [==============================] - 0s 158us/sample - loss: 5.6189 - mae: 1.6223 - mse: 5.6189 - val_loss: 9.9788 - val_mae: 2.3332 - val_mse: 9.9788
Epoch 86/100
354/354 [==============================] - 0s 158us/sample - loss: 5.8193 - mae: 1.6930 - mse: 5.8193 - val_loss: 10.4070 - val_mae: 2.1490 - val_mse: 10.4070
Epoch 87/100
354/354 [==============================] - 0s 155us/sample - loss: 5.5919 - mae: 1.6152 - mse: 5.5919 - val_loss: 9.9985 - val_mae: 2.2546 - val_mse: 9.9985
Epoch 88/100
354/354 [==============================] - 0s 160us/sample - loss: 5.6652 - mae: 1.6246 - mse: 5.6652 - val_loss: 9.1506 - val_mae: 2.0642 - val_mse: 9.1506
Epoch 89/100
354/354 [==============================] - 0s 157us/sample - loss: 5.6349 - mae: 1.6108 - mse: 5.6349 - val_loss: 9.8522 - val_mae: 2.0813 - val_mse: 9.8522
Epoch 90/100
354/354 [==============================] - 0s 159us/sample - loss: 5.6165 - mae: 1.6449 - mse: 5.6165 - val_loss: 9.1553 - val_mae: 2.0421 - val_mse: 9.1553
Epoch 91/100
354/354 [==============================] - 0s 161us/sample - loss: 5.5416 - mae: 1.6153 - mse: 5.5416 - val_loss: 10.4231 - val_mae: 2.2880 - val_mse: 10.4231
Epoch 92/100
354/354 [==============================] - 0s 158us/sample - loss: 5.3909 - mae: 1.5863 - mse: 5.3909 - val_loss: 8.8087 - val_mae: 2.1022 - val_mse: 8.8087
Epoch 93/100
354/354 [==============================] - 0s 155us/sample - loss: 5.3540 - mae: 1.5986 - mse: 5.3540 - val_loss: 9.6963 - val_mae: 2.1931 - val_mse: 9.6963
Epoch 94/100
354/354 [==============================] - 0s 161us/sample - loss: 5.3198 - mae: 1.6074 - mse: 5.3198 - val_loss: 9.1875 - val_mae: 2.1917 - val_mse: 9.1875
Epoch 95/100
354/354 [==============================] - 0s 165us/sample - loss: 5.2299 - mae: 1.5638 - mse: 5.2299 - val_loss: 8.8746 - val_mae: 2.1273 - val_mse: 8.8746
Epoch 96/100
354/354 [==============================] - 0s 163us/sample - loss: 5.2789 - mae: 1.5651 - mse: 5.2789 - val_loss: 9.7351 - val_mae: 2.2359 - val_mse: 9.7351
Epoch 97/100
354/354 [==============================] - 0s 153us/sample - loss: 5.3399 - mae: 1.6002 - mse: 5.3399 - val_loss: 9.7185 - val_mae: 2.1080 - val_mse: 9.7185
Epoch 98/100
354/354 [==============================] - 0s 159us/sample - loss: 5.0072 - mae: 1.5055 - mse: 5.0072 - val_loss: 8.3621 - val_mae: 2.0586 - val_mse: 8.3621
Epoch 99/100
354/354 [==============================] - 0s 156us/sample - loss: 5.2596 - mae: 1.5557 - mse: 5.2596 - val_loss: 8.6406 - val_mae: 2.0527 - val_mse: 8.6406
Epoch 100/100
354/354 [==============================] - 0s 159us/sample - loss: 5.0983 - mae: 1.5543 - mse: 5.0983 - val_loss: 8.4836 - val_mae: 2.0234 - val_mse: 8.4836
%% Cell type:markdown id: tags:
## Step 6 - Evaluate
### 6.1 - Model evaluation
MAE = Mean Absolute Error (between the labels and the predictions).
Since prices are expressed in thousands of dollars, an MAE of 3 represents an average prediction error of $3,000.
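As a quick sanity check, MAE is easy to compute by hand. A minimal sketch with toy values (not taken from this run):
``` python
import numpy as np

# Toy labels and predictions, in k$ (illustrative values only)
y_true = np.array([21.0, 34.5, 15.2])
y_pred = np.array([23.0, 31.5, 16.2])

mae = np.mean(np.abs(y_true - y_pred))   # (2.0 + 3.0 + 1.0) / 3 = 2.0
print('mae : {:5.4f}'.format(mae))       # an average error of $2,000
```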
%% Cell type:code id: tags:
``` python
score = model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
```
%% Output
x_test / loss : 8.4836
x_test / mae : 2.0234
x_test / mse : 8.4836
%% Cell type:markdown id: tags:
### 6.2 - Training history
What was the best result during our training?
%% Cell type:code id: tags:
``` python
df=pd.DataFrame(data=history.history)
df.describe()
```
%% Output
loss mae mse val_loss val_mae val_mse
count 100.000000 100.000000 100.000000 100.000000 100.000000 100.000000
mean 15.144930 2.312168 15.144930 17.019036 2.582618 17.019036
std 43.707091 1.906713 43.707090 26.587745 1.288267 26.587746
min 5.007155 1.505515 5.007155 8.362053 2.023406 8.362053
25% 6.285225 1.716563 6.285225 10.419040 2.192718 10.419040
50% 8.037316 1.922454 8.037317 12.488579 2.301342 12.488580
75% 10.482029 2.189933 10.482029 14.470699 2.503943 14.470701
max 414.560260 18.257650 414.560242 266.372801 13.991282 266.372803
%% Cell type:code id: tags:
``` python
print("min( val_mae ) : {:.4f}".format( min(history.history["val_mae"]) ) )
```
%% Output
min( val_mae ) : 2.0234
%% Cell type:code id: tags:
``` python
ooo.plot_history(history, plot={'MSE' :['mse', 'val_mse'],
                                'MAE' :['mae', 'val_mae'],
                                'LOSS':['loss','val_loss']})
```
%% Output
%% Cell type:markdown id: tags:
## Step 7 - Make a prediction
%% Cell type:code id: tags:
``` python
my_data = [ 1.26425925, -0.48522739, 1.0436489 , -0.23112788, 1.37120745,
-2.14308942, 1.13489104, -1.06802005, 1.71189006, 1.57042287,
0.77859951, 0.14769795, 2.7585581 ]
real_price = 10.4
my_data=np.array(my_data).reshape(1,13)
```
%% Cell type:code id: tags:
``` python
predictions = model.predict( my_data )
print("Prédiction : {:.2f} K$".format(predictions[0][0]))
print("Reality : {:.2f} K$".format(real_price))
```
%% Output
Prediction : 11.59 K$
Reality : 10.40 K$
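%% Cell type:markdown id: tags:
Note that `my_data` above is already normalized. To predict from raw feature values, the same `mean` and `std` computed in step 3.2 must be applied first. A minimal sketch, with a hypothetical (made-up) raw house description:
``` python
# Hypothetical raw (un-normalized) feature values, in the dataset's column order
raw_house = np.array([[0.02731, 0.0, 7.07, 0.0, 0.469, 6.421, 78.9,
                       4.9671, 2.0, 242.0, 17.8, 396.90, 9.14]])

# Normalize with the *training* statistics from step 3.2 (mean and std are pandas Series)
x = (raw_house - mean.values) / std.values

print("Prediction : {:.2f} K$".format(model.predict(x)[0][0]))
```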
%% Cell type:markdown id: tags:
---
![](../fidle/img/00-Fidle-logo-01_s.png)