ENG-639 / 1 credit

Lecturer: Summers Tyler

Language: English

Remark: Next offering: Fall 2024


Summary

This course provides an introduction to stochastic optimal control and dynamic programming (DP), with a variety of engineering applications. The course focuses on the DP principle of optimality and its utility in deriving exact and approximate solutions to optimal control problems.
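As a brief illustration (not part of the official course description), the principle of optimality for a finite-horizon Markov decision process can be written as the backward DP recursion; the notation here is an assumption following standard DP texts, with state x_k, control u_k, disturbance w_k, dynamics f_k, stage cost g_k, and terminal cost g_N:

J_N(x_N) = g_N(x_N)
J_k(x_k) = \min_{u_k \in U_k(x_k)} \mathbb{E}_{w_k}\!\left[ g_k(x_k, u_k, w_k) + J_{k+1}\!\big(f_k(x_k, u_k, w_k)\big) \right], \quad k = N-1, \dots, 0

The optimal cost of the overall problem is J_0(x_0), and a minimizing u_k at each state and stage yields an optimal policy.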

Content

Day 1
Lecture 1: Intro and Course Outline
Lecture 2: Mathematical Modeling Framework (Markov Decision Processes)
Lecture 3: The Principle of Optimality and Dynamic Programming (DP)
Days 2-3
Finite Space Systems
Lecture 4: Markov Chains
Lecture 5: DP for finite space Markov Decision Processes
Lecture 6: Coding DP (inventory and river flow examples)
Lecture 7: Infinite Horizon Problems, Value Iteration, Policy Iteration (a value iteration sketch follows this outline)
Days 4-5
Continuous Space Systems and Linear Quadratic (LQ) Problems
Lecture 8: Dynamic Programming in continuous space problems, curse of dimensionality, limitations
Lecture 9: DP in LQ problems
Lecture 10: LQ variations, time-varying, infinite horizon, multiplicative noise, dynamic games, etc.
Days 5-6
Approximate DP and Reinforcement Learning (RL) and Advanced Topics
Lecture 11: Approximate Dynamic Programming I (touch on MPC, RL)
Lecture 12: Approximate Dynamic Programming II (touch on MPC, RL)
Lecture 13: Supply Chain Example, Project Description
Lecture 14: Imperfect State Information (time permitting)
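The following is a minimal sketch of value iteration for a discounted finite-state, finite-action MDP, in the spirit of Lectures 6-7. The transition tensor P, cost array c, discount factor, and toy numbers are illustrative placeholders and not course-provided material.

import numpy as np

def value_iteration(P, c, gamma=0.95, tol=1e-8, max_iters=10_000):
    """Value iteration for a discounted finite MDP.

    P : array of shape (n_actions, n_states, n_states); P[u, x, y] = Pr(next=y | state=x, action=u)
    c : array of shape (n_states, n_actions); c[x, u] = expected stage cost
    Returns the optimal cost-to-go J and a greedy (optimal) policy.
    """
    n_states, n_actions = c.shape
    J = np.zeros(n_states)
    for _ in range(max_iters):
        # Bellman operator: Q[x, u] = c[x, u] + gamma * sum_y P[u, x, y] * J[y]
        Q = c + gamma * np.einsum("uxy,y->xu", P, J)
        J_new = Q.min(axis=1)
        if np.max(np.abs(J_new - J)) < tol:
            J = J_new
            break
        J = J_new
    policy = Q.argmin(axis=1)
    return J, policy

# Toy 2-state, 2-action example with made-up numbers (illustrative only)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0
              [[0.5, 0.5], [0.6, 0.4]]])  # action 1
c = np.array([[1.0, 2.0],
              [0.5, 0.3]])
J, policy = value_iteration(P, c)
print("Optimal cost-to-go:", J)
print("Greedy policy:", policy)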

Learning Prerequisites

Recommended courses

Linear algebra, probability theory, optimization, and sufficient mathematical maturity.

In the study plans

  • Exam form: Practical work report (free session)
  • Subject examined: Dynamic programming and optimal control
  • Lectures: 15 hour(s)
  • Type: optional
