Commit 50c06220 authored by Florent Chatelain

fix typo

parent 58583b35
......@@ -428,9 +428,16 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.3"
"version": "3.7.9"
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"state": {},
"version_major": 2,
"version_minor": 0
}
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}
......@@ -328,7 +328,14 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.3"
"version": "3.7.9"
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"state": {},
"version_major": 2,
"version_minor": 0
}
}
},
"nbformat": 4,
......
......@@ -3,11 +3,12 @@
This notebook can be run on mybinder: [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/git/https%3A%2F%2Fgricad-gitlab.univ-grenoble-alpes.fr%2Fchatelaf%2Fparcours-numerique-ia/master?filepath=notebooks%2F/9_RNN_LSTM/N1_LSTM_example.ipynb)
%% Cell type:markdown id: tags:
Given the computational load, an efficient alternative is to use the UGA JupyterHub service https://jupyterhub.u-ga.fr/. In this case, to install TensorFlow 2.x, just type
!pip install --user --upgrade tensorflow
!pip install --user --upgrade tensorflow
in a code cell, then restart the notebook (or just restart the kernel).
%% Cell type:markdown id: tags:
LSTM example on forecasting a scalar temperature value from a vector of past values.
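As a rough illustration of that setup, here is a minimal Keras sketch (the toy series, window length, and layer sizes are assumptions for the example, not the notebook's actual data or hyperparameters):
``` python
# Hedged sketch of a scalar forecaster: an LSTM reads a window of past
# values and a Dense layer outputs the next one. Toy data, illustrative sizes.
import numpy as np
import tensorflow as tf

window = 24                                   # number of past values per input
series = np.sin(np.linspace(0, 100, 2000)).astype("float32")  # toy "temperature"
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]                        # shape (n_samples, window, 1)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),                 # scalar forecast
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```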
......
......@@ -304,11 +304,11 @@
%% Cell type:markdown id: tags:
### Exercise
- Based on the logistic regression formula for the probability of each class (with, or without CHD) and the values of the estimated weights, what would be the multiplicative factor on the odds $$\frac{ \Pr(\textrm{"CHD"} |X=x)}{\Pr( \textrm{"no CHD"}|X=x)}$$ when the tobacco consumption increases by 1 (in standardized units)?
- Check this by adding an offset to the tobbaco variable in the cell above and comparing the obtained odds
- Check this by adding an offset to the tobacco variable in the cell above and comparing the obtained odds
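For a numerical check, here is a minimal sketch on synthetic data (the features and model below are illustrative stand-ins, not the notebook's CHD dataset): for logistic regression the odds equal $\exp(w^\top x + b)$, so increasing one standardized variable by 1 multiplies the odds by $e^{w_j}$.
``` python
# Hedged sketch: verify that a +1 offset on a standardized feature
# multiplies the logistic-regression odds by exp(w_j).
# Data and names are illustrative, not the notebook's CHD dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))                  # standardized features
y = (X[:, 0] + 0.5 * rng.standard_normal(200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
x = X[:1].copy()
x_shift = x.copy()
x_shift[0, 0] += 1.0                               # offset the first variable by 1

odds = lambda p: p / (1 - p)
ratio = odds(clf.predict_proba(x_shift)[0, 1]) / odds(clf.predict_proba(x)[0, 1])
print(ratio, np.exp(clf.coef_[0, 0]))              # both equal exp(w_0)
```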
%% Cell type:markdown id: tags:
## Greedy variable selection procedure
......@@ -368,11 +368,11 @@
Note also that there exists another popular method in the statistical literature called *Forward stepwise selection*. Its selection step differs from the one used in OMP in that it selects the variable that will lead to the minimum residual error *after* orthogonalisation. See for instance the following paper for a comparison between the two methods
> Blumensath, Thomas, and Mike E. Davies. ["On the difference between orthogonal matching pursuit and orthogonal least squares."](https://eprints.soton.ac.uk/142469/1/BDOMPvsOLS07.pdf) (2007).
This principle can be extended to generalized linear models such as logistic regression by replacing the residual sum of squares criterion with the negative log-likelihood.
Within scikit-learn, only OMP is available for linear regression model with [`linear_model.OrthogonalMatchingPursuit`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.OrthogonalMatchingPursuit.html). However this can be directly apply to our binary classification problem
Within scikit-learn, only OMP is available for linear regression model with [`linear_model.OrthogonalMatchingPursuit`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.OrthogonalMatchingPursuit.html). However this can be directly applied to our binary classification problem
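One possible way to do this, sketched below on illustrative synthetic data (not the notebook's dataset), is to recode the binary labels as $\pm 1$, run the regression-based OMP on them, and classify by the sign of the fitted values:
``` python
# Hedged sketch: OMP is a regression method, but recoding the two classes
# as -1/+1 lets it both select variables and act as a linear classifier.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))            # illustrative design matrix
y = np.sign(X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.standard_normal(200))  # +/-1 labels

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2).fit(X, y)
selected = np.flatnonzero(omp.coef_)          # indices of the selected variables
y_pred = np.sign(omp.predict(X))              # classify by the sign of the fit
print(selected, (y_pred == y).mean())
```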
%% Cell type:code id: tags:
``` python
# We use Forward selection, aka Orthogonal Matching Pursuit
......