
Module Goals

M01 : Motivating Applications, Machine Learning Pipeline (Data, Models, Loss, Optimization), Backpropagation

Goals

  • Understand the key components to set up a classification task
  • Relate business problems to machine learning methods
  • Understand how chain rule works
  • Understand why multiclass logistic regression can fail even on simple 2D data that is not linearly separable
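
The chain rule behind backpropagation can be sketched on a one-parameter model. This is a minimal illustration with hypothetical values, not a full training loop: the loss is (w*x + b - y)^2 and each gradient is built by multiplying local derivatives.

```python
def forward_and_grad(w, b, x, y):
    # Forward pass: compute intermediate values.
    z = w * x + b          # linear prediction
    e = z - y              # error
    loss = e ** 2          # squared loss

    # Backward pass: apply the chain rule step by step.
    dloss_de = 2 * e       # d(e^2)/de
    de_dz = 1.0            # d(z - y)/dz
    dz_dw = x              # d(w*x + b)/dw
    dz_db = 1.0            # d(w*x + b)/db

    grad_w = dloss_de * de_dz * dz_dw
    grad_b = dloss_de * de_dz * dz_db
    return loss, grad_w, grad_b

loss, gw, gb = forward_and_grad(w=2.0, b=1.0, x=3.0, y=5.0)
print(loss, gw, gb)   # loss = (7-5)^2 = 4, grad_w = 4*3 = 12, grad_b = 4
```

Backpropagation in a deep network is this same bookkeeping applied layer by layer, reusing intermediate derivatives instead of recomputing them.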

M02 : Feedforward Networks: Nonlinearities, Convolutional Neural Networks: Convolution, Pooling

Goals

  • Get acquainted with the basics of Python
  • Understand the notion of hidden layers and nonlinearities
  • Understand the convolution layer as a collection of filters applied to input tensors
  • Understand why pooling helps reduce the number of parameters in downstream layers
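
The last two goals can be made concrete with a minimal numpy sketch, assuming a single 2D input and a single hand-picked filter (real layers stack many filters over multi-channel tensors):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one filter over a 2D input ('valid' padding, stride 1)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool2d(x, size=2):
    """Non-overlapping max pooling: shrinks the map, so fewer downstream weights."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
edge_filter = np.array([[1.0, -1.0], [1.0, -1.0]])  # hypothetical vertical-edge filter
fmap = conv2d_valid(image, edge_filter)   # 3x3 feature map
pooled = max_pool2d(fmap, size=2)         # smaller map feeds fewer weights downstream
```

Pooling halves each spatial dimension here, so any fully connected layer after it needs roughly a quarter of the weights it would otherwise.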

M03 : Jumpstarting Convolutional Neural Networks: Visualization, Transfer, Practical Models (VGG, ResNet)

Goals

  • Understand how to transfer parameters previously learned for a new task
  • Know the different ways to debug a deep network
  • Be aware of the different engineering tricks such as dropout, batch normalization
  • Learn why image datasets can be enhanced using data augmentation
  • Understand parameter-efficient fine-tuning techniques (LoRA, adapters) for pretrained models
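
The parameter-efficient fine-tuning idea in the last goal can be sketched in a few lines of numpy: freeze the pretrained weight and train only a low-rank update, as in LoRA. Shapes and initialization below are illustrative assumptions, not any particular library's API.

```python
import numpy as np

# Hypothetical shapes: a frozen pretrained weight W (d_out x d_in) plus a
# trainable low-rank update B @ A with rank r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weights
A = rng.normal(size=(r, d_in)) * 0.01     # trainable rank-r factor
B = np.zeros((d_out, r))                  # trainable, initialized to zero

def lora_forward(x):
    # With B = 0, the adapted layer starts out identical to the pretrained one.
    return W @ x + B @ (A @ x)

full_params = d_out * d_in                # parameters in a full fine-tune
lora_params = r * (d_out + d_in)          # parameters actually trained
print(full_params, lora_params)           # 64 vs 32 even at this toy scale
```

At realistic sizes (d in the thousands, r around 8) the trained fraction drops below one percent, which is the point of the technique.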

M04 : Text and Embeddings: Introduction to NLP, Word Embeddings, Word2Vec

Goals

  • Understand how natural language elements (such as words) are processed in an analytics workflow
  • Understand the shortcomings of methods such as Naive Bayes, Latent Dirichlet Allocation
  • Realize that a CNN can also be used for an NLP task (sentence classification/sentiment analysis)
  • What is word2vec and how does it help in NLP tasks?
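
What word2vec buys you can be shown without training: words that occur in similar contexts end up with similar vectors, so semantic similarity reduces to a dot product. The embeddings below are hypothetical stand-ins for trained skip-gram vectors.

```python
import numpy as np

# Toy vocabulary with hypothetical embeddings; real word2vec learns these
# by predicting context words (skip-gram) or center words (CBOW).
emb = {
    "king":  np.array([0.9, 0.8]),
    "queen": np.array([0.85, 0.75]),
    "apple": np.array([-0.7, 0.6]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Related words score higher than unrelated ones.
print(cosine(emb["king"], emb["queen"]) > cosine(emb["king"], emb["apple"]))  # True
```

Downstream NLP models consume these dense vectors instead of sparse one-hot word indicators, which is why word2vec features transfer across tasks.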

M05 : Recurrent Neural Networks and Transformers: Sequence to Sequence Learning, RNNs and LSTMs

Goals

  • Know when prediction tasks can have sequential dependencies
  • The RNN architecture and unfolding
  • Know how LSTMs work
  • Applications of sequence-to-sequence models
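
The RNN unfolding in the second goal is just a loop that reuses the same weights at every time step. A minimal numpy forward pass, with hypothetical toy sizes (hidden size 3, input size 2):

```python
import numpy as np

rng = np.random.default_rng(1)
W_xh = rng.normal(size=(3, 2)) * 0.1   # input-to-hidden weights
W_hh = rng.normal(size=(3, 3)) * 0.1   # hidden-to-hidden weights (shared across steps)
b_h = np.zeros(3)

def rnn_forward(inputs):
    h = np.zeros(3)                    # initial hidden state
    states = []
    for x in inputs:                   # "unfolding": the same weights at every step
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return states

seq = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
states = rnn_forward(seq)
print(len(states), states[-1].shape)   # 3 (3,)
```

An LSTM replaces the single tanh update with gated updates to a cell state, which is what lets gradients survive across long sequences.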

M06 : Advanced NLP: Attention, BERT and Transformers, LLMs, VLMs, MLLMs

Goals

  • Be able to explain self-attention and how it differs from simpler attention mechanisms seen in sequence to sequence models
  • Be able to reason about keys, values and queries in self-attention
  • Be able to recall the key characteristics of BERT and how pre-trained models can be used for NLP tasks
  • Understand the architecture and training paradigm of Large Language Models (LLMs)
  • Know the basics of LLM fine-tuning using parameter-efficient methods such as LoRA
  • Be aware of vision-language models (VLMs) and multimodal LLMs (MLLMs)
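
The keys/values/queries reasoning in the second goal fits in one function: single-head scaled dot-product self-attention, sketched in numpy with hypothetical toy sizes (4 tokens, model dim 6, head dim 3).

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention over a sequence X."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v          # queries, keys, values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)              # every position attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                           # weighted sum of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 6))
W_q, W_k, W_v = (rng.normal(size=(6, 3)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)    # (4, 3)
```

Unlike the encoder-decoder attention of sequence-to-sequence RNNs, queries, keys, and values here all come from the same sequence, so every token can attend to every other in one parallel step.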

M07 : Unsupervised Deep Learning: Variational Autoencoders, Diffusion Models, Generative Adversarial Networks

Goals

  • Meaning of generative modeling
  • What are variational autoencoders (VAEs) and where can they be used?
  • The intuition behind generative adversarial networks (GANs)
  • Differences between GANs and VAEs
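
One concrete mechanism worth knowing from the VAE goals is the reparameterization trick: sampling z = mu + sigma * eps with eps drawn from a standard normal, so gradients can flow through mu and sigma. The encoder outputs below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Noise is sampled outside the computation graph; the output is a
    # deterministic, differentiable function of mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps   # sigma = exp(log_var / 2)

mu = np.array([0.0, 1.0])        # hypothetical encoder mean
log_var = np.array([0.0, -2.0])  # hypothetical encoder log-variance
z = reparameterize(mu, log_var)
print(z.shape)                   # (2,)
```

A GAN has no such explicit latent posterior: its generator maps noise to samples directly, and a discriminator rather than a reconstruction-plus-KL loss drives training, which is the core difference asked for in the last goal.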

M08 : Online Learning: A/B Testing, Multi-armed Bandits, Contextual Bandits

Goals

  • What is online learning? How is it different from supervised learning?
  • Relation between forecasting and decision making
  • The multi-armed bandit problem and its solutions
  • Contextual bandits
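
The bandit goals can be grounded in an epsilon-greedy agent on a 3-armed bandit. The payout probabilities below are hypothetical and hidden from the agent, which illustrates the online-learning loop: act, observe one reward, update.

```python
import random

random.seed(0)
true_probs = [0.2, 0.5, 0.8]            # unknown to the agent
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]                # running mean reward per arm
epsilon = 0.1

for t in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(3)       # explore a random arm
    else:
        arm = values.index(max(values)) # exploit the current best estimate
    reward = 1.0 if random.random() < true_probs[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best = values.index(max(values))
print(best)    # almost certainly arm 2, the arm with the highest true payout
```

A contextual bandit extends this by conditioning the arm choice on observed features; an A/B test is the degenerate case where the allocation never adapts.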

M09 : Reinforcement Learning: Policies, State-Action Value Functions, Bellman Equations, Q Learning

Goals

  • What is reinforcement learning?
  • Basics of Markov Decision Processes
  • Policies, Value functions and how to think about these two objects
  • Be able to understand the difference between Bellman Expectation Equation and Bellman Optimality Equation
  • Intuitive reasoning for the Q-Learning update rule
  • Be able to identify relationships between state value functions, state-action value functions and policies
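
The Q-learning update rule from the goals above can be exercised on a tiny hypothetical MDP: a 4-state chain where only reaching the last state pays a reward. The update moves Q(s, a) toward r + gamma * max over a' of Q(s', a'), which is the sampled Bellman optimality backup.

```python
import random

random.seed(0)
n_states, n_actions = 4, 2              # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

for episode in range(200):
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = Q[s].index(max(Q[s]))
        s2, r = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [row.index(max(row)) for row in Q]
print(policy)    # entries for states 0..2 should be 1 (move right)
```

Note the link between the objects in the goals: the greedy policy is read off the state-action values, and the state value is just the maximum of Q(s, a) over actions.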

M10 : Deep Reinforcement Learning: Function Approximation, DQN for Atari Games, MCTS for AlphaGo

Goals

  • Know the role of function approximation in Q-learning
  • Be able to understand the key innovations in the DQN model
  • Identify the differences between Monte Carlo tree search and Monte Carlo rollouts
  • Be able to identify key components of the AlphaGo (and variants such as AlphaZero) Go playing agent
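
The role of function approximation in the first goal can be sketched with a linear approximator, Q(s, a) ≈ w[a] · phi(s): the idea DQN scales up by replacing the hand-crafted features with a convolutional network. The environment, features, and values below are hypothetical.

```python
import numpy as np

n_actions, n_features = 2, 3
w = np.zeros((n_actions, n_features))   # one weight vector per action
alpha, gamma = 0.1, 0.9

def phi(s):
    # Hand-crafted state features; DQN learns features instead.
    return np.array([1.0, s, s * s])

def q_values(s):
    return w @ phi(s)

def td_update(s, a, r, s2):
    # Semi-gradient TD update toward r + gamma * max_a' Q(s', a')
    # (DQN stabilizes this with a separate target network and replay buffer).
    target = r + gamma * q_values(s2).max()
    w[a] += alpha * (target - q_values(s)[a]) * phi(s)

td_update(s=0.5, a=1, r=1.0, s2=0.2)
print(q_values(0.5))    # action 1's estimate has moved toward the target
```

Because nearby states share features, one update generalizes across states, which is exactly what a lookup table cannot do at Atari-scale state spaces.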

M11 : AI Ethics, Fairness, Accountability, Transparency and Sustainability

Goals

  • Understand the key principles of responsible AI: fairness, accountability, transparency, and ethics
  • Be able to identify sources of bias in ML pipelines (data, model, deployment)
  • Know how to use fairness metrics and tools to evaluate and mitigate bias in models
  • Understand the importance of model interpretability and explainability for stakeholder trust
  • Be aware of the environmental impact of training large models and strategies for sustainable AI
  • Become familiar with regulatory frameworks (e.g., the EU AI Act, the NIST AI Risk Management Framework) governing AI deployment
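
One fairness metric from the goals above, demographic parity, is simple enough to compute by hand: compare the positive-prediction rate across groups. The predictions below are hypothetical, purely to show the computation.

```python
def positive_rate(preds):
    """Fraction of individuals receiving the positive decision."""
    return sum(preds) / len(preds)

group_a_preds = [1, 1, 0, 1, 0, 1]   # hypothetical model decisions for group A
group_b_preds = [1, 0, 0, 0, 0, 1]   # hypothetical model decisions for group B

rate_a = positive_rate(group_a_preds)   # 4/6
rate_b = positive_rate(group_b_preds)   # 2/6
parity_gap = abs(rate_a - rate_b)       # demographic parity difference
print(round(parity_gap, 3))             # 0.333
```

A gap this large would flag the model for review; other metrics in this module (equalized odds, calibration) condition on the true label instead of comparing raw rates.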