Commit d3b88a0c authored by Florent Chatelain's avatar Florent Chatelain

update lab statements

parent 09f7c33f
# Lab 2 statement
# Discriminant Analysis
The objective of this lab is to illustrate discriminant analysis and naïve Bayes methods on both synthetic and computer vision (handwritten digits) datasets.
_Note: For each notebook, read the cells and run the code, then follow the instructions/questions in the `Exercise` cells._
## LDA/QDA
See the notebooks in the [4_discriminant_analysis](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/4_discriminant_analysis/) folder:
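Before diving into the notebooks, here is a minimal sketch of what LDA, QDA and Gaussian naïve Bayes classification look like in scikit-learn. The synthetic data and parameter values below are illustrative only, not those used in the lab:

```python
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Two Gaussian classes with different covariances: QDA (quadratic boundary)
# can model this, while LDA is restricted to a linear boundary.
X0 = rng.normal(loc=[0.0, 0.0], scale=[1.0, 1.0], size=(200, 2))
X1 = rng.normal(loc=[2.0, 2.0], scale=[0.5, 2.0], size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

lda = LinearDiscriminantAnalysis().fit(X, y)
qda = QuadraticDiscriminantAnalysis().fit(X, y)
nb = GaussianNB().fit(X, y)  # naïve Bayes: assumes independent features
print("LDA train accuracy:", lda.score(X, y))
print("QDA train accuracy:", qda.score(X, y))
print("NB  train accuracy:", nb.score(X, y))
```

On this kind of data all three models do well; the notebooks explore when their different covariance assumptions start to matter.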
The following notebooks are _optional_:
<!--
## Part II (PCA)
illustrates principal component analysis on the olympic and iris datasets.
See the notebooks in the [5_principal_component_analysis](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/5_principal_component_analysis/) folder:
1. apply and interpret PCA on the olympic decathlon dataset, [`N1_pca_olympic_data.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/5_principal_component_analysis/N1_pca_olympic_data.ipynb)
-->
# Lab 3 statement
# Principal Component Analysis
The objective of this lab is to illustrate principal component analysis on the olympic and iris datasets.
_Note: For each notebook, read the cells and run the code, then follow the instructions/questions in the `Questions` or `Exercise` cells._
See the notebooks in the [5_principal_component_analysis](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/5_principal_component_analysis/) folder:
1. Apply and interpret PCA on the olympic decathlon dataset, [`N1_pca_olympic_data.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/5_principal_component_analysis/N1_pca_olympic_data.ipynb)
2. Compare PCA with supervised linear discriminant analysis (LDA) for reducing the dimension of the Iris data set, [`N2_pca_versus_lda.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/5_principal_component_analysis/N2_pca_versus_lda.ipynb)
<!-- The following notebook is _optional_: -->
3. Experiment with how a dimension reduction method like PCA can effectively improve the performance of an SVM classifier, [`N3_svm_face_recognition.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/5_principal_component_analysis/N3_svm_face_recognition.ipynb)
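As a quick preview of the PCA-versus-LDA comparison on the Iris data, here is a minimal scikit-learn sketch (illustrative only; the notebooks go further):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# Unsupervised: PCA finds the directions of maximal variance.
pca = PCA(n_components=2).fit(X)
X_pca = pca.transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Supervised: LDA finds the directions that best separate the classes
# (at most n_classes - 1 = 2 components for the 3 Iris classes).
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
X_lda = lda.transform(X)
print(X_pca.shape, X_lda.shape)
```

For Iris, the first principal component alone captures the large majority of the variance, while the LDA axes are chosen for class separation rather than variance.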
# Lab 4 statement
# Linear models: Stochastic gradient descent and ridge (L2) regularization
The objective of this lab is to illustrate regression models, in particular
stochastic gradient descent and ridge (L2) regularization.
_Note: For each notebook, read the cells and run the code, then follow the instructions/questions in the `Questions` or `Exercise` cells._
## Part I: Stochastic gradient descent and ridge (L2) regularization
See the notebooks in the [6_linear_models_ridge](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/6_linear_models_ridge/) folder:
1. Experiment with the learning rate parameter for stochastic gradient descent [`N1_learning_rate_SGD.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/6_linear_models_ridge/N1_learning_rate_SGD.ipynb)
2. Experiment with the effect of ridge (L2) regularization [`N2_L2_regularization.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/6_linear_models_ridge/N2_L2_regularization.ipynb)
3. Apply ridge (L2) regularization to the deconvolution problem [`N3_deconvolution_ridge.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/6_linear_models_ridge/N3_deconvolution_ridge.ipynb)
4. Perform ridge regression to predict high-dimensional NIR biscuits data [`N4_ridge_NIR_biscuits.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/6_linear_models_ridge/N4_ridge_NIR_biscuits.ipynb)
*Note: The first two notebooks are simple interactive demonstrations on the tensorflow playground similar to those made together during the class session.* <!--*They can be skipped during the labwork session*-->
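To see the two fitting strategies side by side, here is a minimal sketch comparing closed-form ridge regression with the same L2-penalized model fitted by stochastic gradient descent. The synthetic data and hyperparameter values are illustrative, not those of the lab:

```python
import numpy as np
from sklearn.linear_model import Ridge, SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
w = rng.normal(size=p)
y = X @ w + 0.1 * rng.normal(size=n)  # linear model with small noise

# Closed-form ridge solution (normal equations with an L2 penalty).
ridge = Ridge(alpha=1.0).fit(X, y)

# The same L2-penalized objective, minimized by SGD; scaling the
# features first makes the step-size schedule better behaved.
sgd = make_pipeline(
    StandardScaler(),
    SGDRegressor(penalty="l2", alpha=1e-3, learning_rate="invscaling",
                 eta0=0.01, max_iter=1000, random_state=0),
).fit(X, y)

print("ridge R^2:", ridge.score(X, y))
print("SGD   R^2:", sgd.score(X, y))
```

Both should fit this easy problem well; the `N1` notebook explores what happens when the learning rate (`eta0` and its schedule) is too large or too small.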
## Part II: Lasso (L1) regularization
See the notebooks in the [6bis_linear_models_lasso_logistic](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/6bis_linear_models_lasso_logistic/) folder:
1. Experiment with the effect of lasso (L1) regularization [`N1_L1_regularization.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/6bis_linear_models_lasso_logistic/N1_L1_regularization.ipynb)
2. Perform Lasso-penalized logistic regression, and greedy variable selection procedures, to model the risk of coronary heart disease based on clinical data [`N2_LR_heart_diseases_SA.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/6bis_linear_models_lasso_logistic/N2_LR_heart_diseases_SA.ipynb)
3. Perform Lasso regression for a high-dimensional sparse model based on the Advertising data set [`N3_lasso_curse_dimensionality.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/6bis_linear_models_lasso_logistic/N3_lasso_curse_dimensionality.ipynb)
*Note: The first notebook is an interactive demonstration on the tensorflow playground similar to the one made together during the class session.*<!-- *This can be skipped during the labwork session*-->
# Support Vector Machines
This part illustrates SVM for both linear and non-linear classification tasks. It also introduces
one-class SVM, an extension of the SVM principle to novelty detection for unsupervised learning.
See the notebooks in the [7_support_vector_machine](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/7_support_vector_machine/) folder:
1. Run and plot the maximum-margin separating hyperplane for a two-class dataset (separable or not) and interpret the role of the kernel and SVM parameters [`N1_plot_svm.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/7_support_vector_machine/N1_plot_svm.ipynb).
2. Perform SVM classification for some pairs of (zip code) digits that are the hardest to discriminate using a linear discriminant analysis method [`N2_svm_zip_digits.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/7_support_vector_machine/N2_svm_zip_digits.ipynb).
The next notebooks are **optional**. If you want to go deeper, take a look at them during or after the class session:
3. *Optional*: Train and plot a one-class SVM algorithm to classify new data as *similar* or *different* to the training set on a toy problem [`N3_oneclasssvm_novelty_detection.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/7_support_vector_machine/N3_oneclasssvm_novelty_detection.ipynb).
4. *Optional*: Experiment with the importance of scaling on a toy example for both linear and non-linear SVMs [`N4_importance_of_scaling-svm.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/7_support_vector_machine/N4_importance_of_scaling-svm.ipynb)
<!--
5. *Optional*: Apply a simple dimension reduction, here PCA, to improve the performance while reducing the computational burden of the SVM classifier on real faces data [`N5_svm_face_recognition.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/7_support_vector_machine/N5_svm_face_recognition.ipynb).
-->
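The linear/non-linear contrast and the one-class idea can be previewed with a minimal scikit-learn sketch (the toy dataset and parameters below are illustrative, not the lab's):

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC, OneClassSVM

# Two interleaved half-moons: not linearly separable.
X, y = make_moons(n_samples=300, noise=0.1, random_state=0)

# A linear kernel can only draw a straight boundary; an RBF kernel
# can bend around the moons.
lin = SVC(kernel="linear", C=1.0).fit(X, y)
rbf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print("linear SVM accuracy:", lin.score(X, y))
print("RBF SVM accuracy:   ", rbf.score(X, y))

# One-class SVM: learn the support of the training data (no labels),
# then flag points that lie outside it as novelties (-1).
oc = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X)
far_away = np.array([[5.0, 5.0]])
print(oc.predict(far_away))  # -1: different from the training data
```

The `nu` parameter bounds the fraction of training points allowed to fall outside the learned region, which is the knob explored in the novelty-detection notebook.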
# Clustering
The objective of this lab is to illustrate basic concepts of clustering.
See the notebooks in the [`8_Clustering`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/8_Clustering/) folder:
1. Implement your own version of K-means in 1D with the Euclidean distance [`N1_Kmeans_basic.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/8_Clustering/N1_Kmeans_basic.ipynb) and compare the results with the scikit-learn K-means implementation.
2. Apply the K-means algorithm (scikit-learn implementation) to the classical Iris dataset [`N2_KMeans_iris_data_example.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/8_Clustering/N2_KMeans_iris_data_example.ipynb)
3. Implement a kernelized version of K-means, and test the importance of an adequate parametrization and choice of initial conditions [`N3_Kernel_Kmeans_example.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/8_Clustering/N3_Kernel_Kmeans_example.ipynb)
4. Implement your own version of EM for a Gaussian model and apply it to the same example used for K-means in a preceding notebook. Compare with K-means and interpret the results [`N4_EM_basic.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/8_Clustering/N4_EM_basic.ipynb)
5. Apply EM to the Iris data set [`N5_EM_iris_data_example.ipynb`](https://gricad-gitlab.univ-grenoble-alpes.fr/ai-courses/autonomous_systems_ml/-/blob/master/notebooks/8_Clustering/N5_EM_iris_data_example.ipynb)
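As a starting point for the first exercise, here is a minimal sketch of 1-D K-means (Lloyd's assignment/update iterations) checked against the scikit-learn implementation. The data, seeds and the `kmeans_1d` helper are illustrative, not part of the lab material:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_1d(x, k, n_iter=100, seed=0):
    """Plain Lloyd iterations for 1-D data with the Euclidean distance."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)  # init from data points
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest center.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # Update step: each center moves to the mean of its cluster.
        new_centers = np.array([x[labels == j].mean() for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return np.sort(centers)

rng = np.random.default_rng(0)
# Two well-separated 1-D blobs around 0 and 5.
x = np.concatenate([rng.normal(0, 0.5, 100), rng.normal(5, 0.5, 100)])

ours = kmeans_1d(x, k=2)
sk = np.sort(KMeans(n_clusters=2, n_init=10, random_state=0)
             .fit(x.reshape(-1, 1)).cluster_centers_.ravel())
print(ours, sk)  # both should be close to [0, 5]
```

Note that this sketch ignores the empty-cluster corner case; handling it (e.g. by re-seeding a center) is part of making your notebook version robust.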