This repository is no longer maintained and has moved to plmlab.math.cnrs.fr/wikistat/High-Dimensional-Deep-Learning.
The main theme of the course is learning methods, especially deep neural networks, for processing high-dimensional data such as signals or images. We will cover the following topics:
- Neural networks and introduction to deep learning: definition of neural networks, activation functions, multilayer perceptron, backpropagation algorithm, optimization algorithms, regularization.
  - Application: implementation of a one-hidden-layer MLP with NumPy (see the sketch after this list).
- Convolutional neural networks: convolutional layer, pooling, dropout, convolutional network architectures (ResNet, Inception), transfer learning and fine-tuning, applications to image or signal classification, applications to object localization and detection.
  - Application 1: image classification on the MNIST and CatsVsDogs data with TensorFlow (see the sketch after this list).
  - Application 2: object localization and detection with CNNs.
- Encoder-decoder networks, variational autoencoders, generative adversarial networks (see the autoencoder sketch after this list).
- Functional decomposition on spline, Fourier, or wavelet bases: cubic splines, penalized least squares criterion, Fourier basis, wavelet bases, applications to nonparametric regression, linear estimators and nonlinear estimators by thresholding, links with the LASSO method (see the wavelet thresholding sketch after this list).
- Anomaly detection for functional data: One-Class SVM, Random Forest, Isolation Forest, Local Outlier Factor, with applications to anomaly detection in functional data (see the Isolation Forest sketch after this list).
- Recurrent neural networks.
  - Application: sentiment analysis with recurrent neural networks (see the sketch after this list).
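As a taste of the first application (a one-hidden-layer MLP trained with backpropagation in NumPy), here is a minimal sketch; the toy data, layer size, and learning rate are illustrative assumptions, not the course notebook.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (illustrative only).
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer with sigmoid activations.
n_hidden, lr = 16, 0.5
W1 = rng.normal(scale=0.1, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)          # hidden activations
    P = sigmoid(H @ W2 + b2)          # predicted probabilities
    # Backward pass for the binary cross-entropy loss.
    dZ2 = (P - y) / len(X)            # gradient w.r.t. output pre-activation
    dW2, db2 = H.T @ dZ2, dZ2.sum(0)
    dZ1 = (dZ2 @ W2.T) * H * (1 - H)  # gradient w.r.t. hidden pre-activation
    dW1, db1 = X.T @ dZ1, dZ1.sum(0)
    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", ((P > 0.5) == y).mean())
```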
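For the convolutional networks application, a minimal Keras/TensorFlow sketch of an MNIST classifier; the architecture and number of epochs are illustrative assumptions.

```python
from tensorflow import keras

# Load and normalize MNIST (28x28 grayscale digits).
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# A small convolutional network: two conv/pool blocks and a dense head.
model = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=3, validation_split=0.1)
print(model.evaluate(x_test, y_test, verbose=0))
```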
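For the encoder-decoder topic, a minimal Keras sketch of a dense autoencoder on MNIST; the latent dimension and training setup are illustrative assumptions (variational and adversarial variants add a sampling layer and an adversarial loss, respectively).

```python
from tensorflow import keras

# Flattened MNIST digits as 784-dimensional vectors in [0, 1].
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

latent_dim = 32

# The encoder compresses to a low-dimensional code, the decoder reconstructs the image.
encoder = keras.Sequential([
    keras.layers.Input(shape=(784,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(latent_dim, activation="relu"),
])
decoder = keras.Sequential([
    keras.layers.Input(shape=(latent_dim,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder = keras.Sequential([encoder, decoder])

autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, batch_size=256, epochs=5,
                validation_data=(x_test, x_test))
```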
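For the functional decomposition topic, a minimal sketch of a nonlinear estimator by soft thresholding of wavelet coefficients, assuming the PyWavelets package (`pywt`) is available (it may not be in `environment.yml`); the signal, wavelet, and threshold rule are illustrative choices.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)

# Noisy observations of a signal on a regular grid.
t = np.linspace(0, 1, 512)
signal = np.sin(4 * np.pi * t) + 0.5 * np.sign(t - 0.5)
noisy = signal + 0.2 * rng.normal(size=t.size)

# Discrete wavelet transform, soft-threshold the detail coefficients,
# then reconstruct: a nonlinear (thresholding) estimator.
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise level estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))     # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[:noisy.size]

print("MSE noisy   :", np.mean((noisy - signal) ** 2))
print("MSE denoised:", np.mean((denoised - signal) ** 2))
```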
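For the anomaly detection topic, a minimal scikit-learn sketch with an Isolation Forest on toy functional-style data; the simulated curves and contamination rate are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy "functional" data: each row is a curve sampled on 50 time points.
t = np.linspace(0, 1, 50)
normal_curves = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(100, t.size))
abnormal_curves = np.sin(2 * np.pi * t) + 1.5 * rng.normal(size=(5, t.size))
X = np.vstack([normal_curves, abnormal_curves])

# Fit an Isolation Forest directly on the discretized curves.
clf = IsolationForest(n_estimators=200, contamination=0.05, random_state=0)
labels = clf.fit_predict(X)          # +1 for inliers, -1 for anomalies

print("detected anomalies at rows:", np.where(labels == -1)[0])
```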
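For the recurrent networks application, a minimal Keras sketch of sentiment analysis on the IMDB reviews dataset; the vocabulary size, sequence length, and architecture are illustrative assumptions.

```python
from tensorflow import keras

vocab_size, maxlen = 10_000, 200

# IMDB reviews come pre-encoded as sequences of word indices.
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words=vocab_size)
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=maxlen)

# Embedding + LSTM + sigmoid output for binary sentiment classification.
model = keras.Sequential([
    keras.layers.Embedding(vocab_size, 64),
    keras.layers.LSTM(64),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=2, validation_split=0.1)
print(model.evaluate(x_test, y_test, verbose=0))
```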
The file environment.yml contains a list of all the packages you need to run the notebooks in this repository. To install them, run the following command in your terminal:
conda env create -f environment.yml
Then, activate the environment:
conda activate hddl
or
source activate hddl
- Lectures: 9 h.
- Practical work: 28 h of applications on real datasets with the Python libraries scikit-learn and Keras/TensorFlow.
- Written exam (50%).
- Project (oral presentation 25% + notebook 25%).
The main objective of this project is to apply the knowledge you acquired during this course by:
- Selecting a deep learning algorithm you have not seen in this course.
- Explaining how this algorithm works, both in a notebook and in an oral presentation. The notebook must explain in detail the method's principles and the experimental procedure.
- Applying this algorithm to a different dataset and discussing the obtained results (notebook and oral presentation).
You can choose a deep learning algorithm from the following list.
This list is not exhaustive and you can suggest other algorithms (that is actually a good idea).
Also, the code linked in these examples is not necessarily the official code nor the one provided by the authors.
Please register in the following document.
Examples of algorithms
- Detection & segmentation
- One-shot learning
- Style transfer
- Generative models
- Unsupervised learning:
  - Supervised Contrastive Learning (paper, code)
  - Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (paper, code)
  - A Simple Framework for Contrastive Learning of Visual Representations (paper, code)
  - Barlow Twins: Self-Supervised Learning via Redundancy Reduction (paper, code)
  - Exploring Simple Siamese Representation Learning (paper, code)
  - Unsupervised Representation Learning by Predicting Image Rotations (paper, code)
  - Self-supervised Label Augmentation via Input Transformations (paper, code)
- Fairness
- Domain adaptation/generalisation
- Regularization
- Time series