EE-568 / 6 credits

Teacher: Cevher Volkan

Language: English


Summary

This course describes the theory and methods of Reinforcement Learning (RL), which revolves around decision making under uncertainty. The course covers classic RL algorithms as well as recent algorithms through the lens of contemporary optimization.

Content

Keywords

Reinforcement Learning (RL)
Markov Decision Process (MDP)
Dynamic Programming
Linear Programming
Policy Gradients
Deep Reinforcement Learning (Deep RL)
Imitation Learning
Markov Games
Robust Reinforcement Learning
RL Algorithms (e.g., Q-Learning, SARSA, TRPO, PPO; see the sketch after this list)
Offline Reinforcement Learning
Behavior Cloning
Inverse Reinforcement Learning
Equilibria
Robustness
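
As a taste of the tabular algorithms named above, below is a minimal Q-Learning sketch on a toy chain MDP. The environment, hyperparameters, and all names are illustrative assumptions for this page, not material taken from the course.

```python
import numpy as np

# Hypothetical toy chain MDP, for illustration only: 5 states in a row,
# action 0 moves left, action 1 moves right, reward 1 at the rightmost state.
n_states, n_actions = 5, 2
gamma, alpha, epsilon = 0.9, 0.1, 0.1  # discount, step size, exploration rate

def step(s, a):
    s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def greedy(q_row):
    # break ties at random so the all-zero initial table explores both directions
    return int(rng.choice(np.flatnonzero(q_row == q_row.max())))

for _ in range(500):
    s, done = 0, False
    while not done:
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else greedy(Q[s])
        s_next, r, done = step(s, a)
        # Q-Learning update: bootstrap on the greedy value of the next state
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.round(Q, 2))  # the learned greedy policy moves right in every state
```

SARSA, also listed above, differs only in the update target: it bootstraps on the action actually taken in the next state rather than on the greedy one.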

 

Learning Prerequisites

Required courses

Previous coursework in calculus, linear algebra, and probability is required. Familiarity with optimization is useful. Familiarity with Python and basic knowledge of the PyTorch deep learning framework are also needed.
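
As a rough calibration of the PyTorch level assumed, the following sketch exercises tensors, autograd, and a manual gradient step; it is an illustrative snippet, not an excerpt from the course material.

```python
import torch

# Linear least-squares on made-up data: the assumed fluency is roughly
# "this snippet should look familiar".
w = torch.randn(3, requires_grad=True)
x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor(1.0)

for _ in range(100):
    loss = (w @ x - y) ** 2      # scalar squared error
    loss.backward()              # autograd populates w.grad
    with torch.no_grad():
        w -= 0.01 * w.grad       # one plain gradient-descent step
        w.grad.zero_()           # clear accumulated gradients
```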

Recommended courses

EE-556 Mathematics of Data: From Theory to Computation

Important concepts to start the course

Familiarity with optimization algorithms, linear programming, and convex duality.
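
For reference, the duality background assumed can be summarized by the textbook primal-dual pair for a standard-form linear program (generic notation, not the course's):

```latex
\begin{align*}
  \text{(P)}\quad & \min_{x \in \mathbb{R}^n} \; c^\top x
      && \text{s.t. } Ax = b,\; x \ge 0,\\
  \text{(D)}\quad & \max_{y \in \mathbb{R}^m} \; b^\top y
      && \text{s.t. } A^\top y \le c.
\end{align*}
% Weak duality: b^\top y \le c^\top x for any feasible pair (x, y);
% strong duality holds when either problem is feasible and bounded.
```

This pair reappears under the Linear Programming keyword above: the Bellman optimality equations of an MDP can be written as an LP over value functions, with state-action occupancy measures playing the role of dual variables.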

Learning Outcomes

By the end of the course, the student must be able to:

  • Define the key features of RL that distinguish it from standard machine learning.
  • Assess / Evaluate the strengths, limitations, and theoretical properties of RL algorithms.
  • Recognize the common boundary connecting optimization and RL.
  • Formulate and solve sequential decision-making problems by applying relevant RL tools.

Teaching methods

Lectures are complemented with Jupyter notebook exercises and a hands-on group project.

Assessment methods

Students are required to complete Jupyter notebook homework assignments. They will work in groups on a course project and present a poster on it at the end of the semester.

 

In the programs

  • Semester: Spring
  • Exam form: Oral (summer session)
  • Subject examined: Reinforcement learning
  • Lecture: 2 Hour(s) per week x 14 weeks
  • Exercises: 2 Hour(s) per week x 14 weeks
  • Project: 2 Hour(s) per week x 14 weeks

Reference week


Thursday, 13h - 15h: Lecture, GCC330

Thursday, 15h - 17h: Exercises (TP), GCC330
