Objectives of the training
The objective of this training course is to understand the evolution from Machine Learning to Deep Learning, to master the main neural network architectures, and to grasp the theoretical, practical, and methodological foundations for designing, training, and evaluating these advanced models.
Prerequisites
Have a basic understanding of programming and be proficient with IT and statistical tools. A basic understanding of machine learning is recommended.
Course architecture
Introduction to AI, Machine Learning, and Deep Learning
• History, basic concepts, and applications of artificial intelligence, far removed from the fantasies surrounding the field.
• Collective intelligence: aggregating knowledge shared by many virtual agents.
• Genetic algorithms: evolving a population of virtual agents through selection.
• Classical machine learning: definition.
• Types of learning: supervised learning, unsupervised learning, reinforcement learning.
• Types of tasks: classification, regression, clustering, density estimation, dimensionality reduction.
• Examples of machine learning algorithms: Linear regression, naive Bayes, random forests.
• Machine learning versus deep learning: why does classical ML remain the state of the art for some problems (random forests and XGBoost)? A short illustration follows below.
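As a minimal illustration of this point, the sketch below trains a scikit-learn random forest on a small tabular dataset; the dataset and hyperparameters are illustrative choices, not part of the course material.

```python
# A minimal sketch of the "classical ML is still competitive" point:
# a random forest trained with scikit-learn on a small tabular dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A few trees already give a strong baseline on tabular data,
# with no feature scaling or architecture tuning required.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```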
Fundamental concepts of a neural network
• Basic mathematics refresher.
• The neural network: architecture, activation functions, weighting of the previous layer's activations...
• Training a neural network: Cost functions, backpropagation, stochastic gradient descent...
• Modeling a neural network: Modeling input and output data according to the type of problem.
• Approximating a function with a neural network.
• Approximating a distribution with a neural network.
• Data augmentation: how to balance a dataset?
• Generalization of neural network results.
• Initialization and regularization of the neural network: L1/L2 regularization, batch normalization.
• Optimization and convergence algorithms.
• Demonstration: fitting functions and distributions with a neural network (see the training-loop sketch below).
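The sketch below is a minimal from-scratch version of the training loop described in this module: a one-hidden-layer network fitted to y = sin(x) with a squared-error cost, hand-written backpropagation, and plain stochastic gradient descent. Layer sizes and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(x)

# Small random initialization of a 1 -> 32 -> 1 network with tanh activation.
W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    i = rng.integers(0, len(x), size=32)      # mini-batch -> stochastic GD
    h = np.tanh(x[i] @ W1 + b1)               # forward pass, hidden layer
    pred = h @ W2 + b2                        # forward pass, output layer
    err = pred - y[i]                         # gradient of the 1/2 MSE cost
    # Backpropagation: apply the chain rule layer by layer.
    dW2 = h.T @ err / len(i); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)            # tanh'(z) = 1 - tanh(z)^2
    dW1 = x[i].T @ dh / len(i); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mini-batch MSE:", float((err**2).mean()))
```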
Common Machine Learning and Deep Learning tools
• Data management tools: Apache Spark, Apache Hadoop.
• Common Machine Learning tools: NumPy, SciPy, scikit-learn.
• High-level DL frameworks: PyTorch, Keras, Lasagne.
• Low-level DL frameworks: Theano, Torch, Caffe, TensorFlow.
Demonstration: applications and limitations of the tools presented (see the Keras sketch below).
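As a hint of what "high-level" means in practice, here is a minimal sketch using TensorFlow's Keras API, where the framework handles gradients and the training loop; the toy data and layer sizes are assumptions made for the example.

```python
import numpy as np
import tensorflow as tf

# Toy binary-classification data, purely illustrative.
X = np.random.rand(512, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

# The model is declared layer by layer; no manual gradient code is needed.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
# compile() wires the optimizer and cost function; fit() runs the training loop.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```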
Convolutional Neural Networks (CNN)
• Fundamental principles and applications.
• Basic functioning of a CNN: convolutional layer, use of a kernel, padding and stride, etc.
• CNN architectures that have advanced the state of the art in image classification: LeNet, VGG Networks, Network in Network, etc.
• Use of an attention model.
• Application to a typical classification scenario (text or image).
• CNNs for generation: super-resolution, pixel-wise segmentation.
• Main strategies for upsampling feature maps in image generation.
Case study: innovations introduced by each CNN architecture and their broader applications (1×1 convolutions, residual connections), both illustrated in the sketch below.
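A minimal PyTorch sketch of the building blocks named above: a 3x3 convolution with padding and stride, a 1x1 convolution, and a residual connection. Channel counts and input size are illustrative.

```python
import torch
import torch.nn as nn

class TinyResidualBlock(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # 3x3 kernel; padding=1 preserves spatial size, stride=1 keeps resolution.
        self.conv3x3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=1)
        # 1x1 convolution: mixes channels without looking at spatial neighbors.
        self.conv1x1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.act(self.conv3x3(x))
        out = self.conv1x1(out)
        return self.act(out + x)        # residual (skip) connection

x = torch.randn(1, 16, 32, 32)          # (batch, channels, height, width)
print(TinyResidualBlock()(x).shape)     # torch.Size([1, 16, 32, 32])
```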
Recurrent Neural Networks (RNN)
• Introduction to RNNs: fundamental principles and applications.
• Basic characteristics of an RNN: hidden activations, backpropagation through time, the unfolded representation.
• Evolution towards Gated Recurrent Units (GRU) and Long Short-Term Memory (LSTM).
• Convergence problems and vanishing gradients.
• Classic architecture types: time series prediction, classification, etc. The encoder-decoder RNN architecture. Use of feedforward models.
• NLP applications: word/character encoding, translation.
• Video application: predicting the next generated frame of a video sequence.
Demonstration: the different states and improvements introduced by the Gated Recurrent Unit and Long Short-Term Memory architectures (see the LSTM sketch below).
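A minimal PyTorch sketch of a recurrent model for time series prediction, assuming an LSTM (nn.GRU would be a drop-in replacement); all sizes are illustrative.

```python
import torch
import torch.nn as nn

class NextStepLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, 1)
        out, (h, c) = self.lstm(x)        # hidden activations at every time step
        return self.head(out[:, -1])      # predict the next value from the last state

x = torch.randn(8, 50, 1)                # 8 sequences of 50 steps each
print(NextStepLSTM()(x).shape)           # torch.Size([8, 1])
```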
Generative models: VAE and GAN
• Presentation of Variational AutoEncoder (VAE) and Generative Adversarial Networks (GAN) generative models.
• Autoencoder: dimensionality reduction and limited generation.
• Variational AutoEncoder: generative model and approximation of data distribution.
• Definition and use of latent space. Reparameterization trick.
• Fundamentals of Generative Adversarial Networks. Convergence of a GAN and difficulties encountered.
• Improved convergence: Wasserstein GAN, BEGAN. The Earth Mover's Distance.
• Applications: image and photograph generation, text generation, super-resolution.
Demonstration: applications of generative models and use of the latent space (see the reparameterization sketch below).
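A minimal PyTorch sketch of the reparameterization trick mentioned above: sampling z = mu + sigma * eps keeps the sampling step differentiable with respect to the encoder outputs. The latent dimensions are illustrative.

```python
import torch

def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)   # the encoder predicts log-variance for stability
    eps = torch.randn_like(std)     # noise is sampled outside the computation graph
    return mu + eps * std           # gradients flow through mu and std, not eps

mu = torch.zeros(4, 8, requires_grad=True)
logvar = torch.zeros(4, 8, requires_grad=True)
z = reparameterize(mu, logvar)      # latent vectors, shape (4, 8)
z.sum().backward()                  # backpropagation reaches the encoder outputs
print(mu.grad.shape)                # torch.Size([4, 8])
```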
Deep Reinforcement Learning
• Reinforcement learning: fundamental principles.
• Using a neural network to approximate the state-value function.
• Deep Q Learning: experience replay and application to video game control.
• Policy optimization methods: on-policy versus off-policy learning, the actor-critic architecture, A3C.
• Applications: control of a simple video game or digital system.
Demonstration: controlling an agent in an environment defined by states and possible actions (see the Deep Q-Learning sketch below).
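A minimal sketch of the Deep Q-Learning ingredients listed above: a Q-network, an experience replay buffer, and the temporal-difference update. The 4-dimensional state, 2 actions, and all hyperparameters are illustrative assumptions rather than course material.

```python
import random
from collections import deque
import torch
import torch.nn as nn

# Q-network mapping a 4-dimensional state to Q-values for 2 actions.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # experience replay decorrelates transitions
gamma = 0.99                    # discount factor

def train_step(batch_size=32):
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = map(torch.stack, zip(*batch))
    q = q_net(s).gather(1, a.view(-1, 1)).squeeze(1)   # Q(s, a)
    with torch.no_grad():                              # temporal-difference target
        target = r + gamma * q_net(s2).max(dim=1).values * (1 - done)
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# Dummy transitions (state, action, reward, next_state, done) so the sketch runs;
# a real agent would collect these by acting in the environment.
for _ in range(64):
    replay.append((torch.randn(4), torch.tensor(random.randrange(2)),
                   torch.tensor(1.0), torch.randn(4), torch.tensor(0.0)))
train_step()
```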
Pedagogical details
Type of training
Private or personalized training
If you have more than 8 people to sign up for a particular course, it can be delivered as a private session right at your offices. Contact us for more details.