CH5024 Numerical Optimal Control Theory


The course introduces numerical methods for solving finite- and infinite-horizon optimal control problems.

Learning Outcomes: Students will learn

  a. to formulate optimal control problems under different scenarios
  b. to solve optimal control problems arising in different applications using numerical techniques
  c. to formulate and solve model predictive control problems

Course Contents:

  1. Review of state-space representation of systems
  2. Introduction to optimization: unconstrained and constrained optimization, KKT conditions
  3. Numerical methods to solve ODE and DAE systems
  4. Optimal control problem formulations: calculus of variations approach to the fixed-time, free-endpoint problem
  5. Pontryagin maximum principle (PMP), Hamilton-Jacobi-Bellman (HJB) equation and the principle of optimality, linear quadratic control and the Riccati equation
  6. Direct methods: (i) simultaneous method, (ii) direct sequential method, (iii) multiple shooting method
  7. Indirect method: two-point boundary value problem (TPBVP)
  8. Dynamic Programming
  9. Online control: moving horizon estimation, introduction to linear and nonlinear model predictive control (MPC), parametric MPC
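As a taste of topic 3, the classical fourth-order Runge-Kutta (RK4) method is one standard way to integrate the ODE systems that arise in optimal control. The sketch below is illustrative, not course material; the test problem x' = -x and the step size are arbitrary choices.

```python
import math

def rk4_step(f, t, x, h):
    """One classical RK4 step for x' = f(t, x) with step size h."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, x0, h, n):
    """Take n RK4 steps from (t0, x0)."""
    t, x = t0, x0
    for _ in range(n):
        x = rk4_step(f, t, x, h)
        t += h
    return x

# Integrate x' = -x from x(0) = 1 up to t = 1; the exact solution is e^{-1}.
x1 = integrate(lambda t, x: -x, 0.0, 1.0, 0.01, 100)
err = abs(x1 - math.exp(-1.0))
```

With RK4's fourth-order accuracy, a step size of 0.01 already gives an error far below single-precision rounding on this problem.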
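The link in topic 5 between linear quadratic control and the Riccati equation can be sketched by iterating the discrete-time Riccati difference equation until it converges to the algebraic Riccati solution. The double-integrator dynamics and the weights Q, R below are illustrative assumptions, not values from the course.

```python
import numpy as np

# Illustrative discretised double integrator (dt = 0.1).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)          # state weight
R = np.array([[1.0]])  # input weight

# Backward Riccati recursion:
#   K = (R + B'PB)^{-1} B'PA,   P <- Q + A'P(A - BK)
P = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P_next = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

# The optimal feedback u = -K x should stabilise the system:
# the closed-loop matrix A - BK has spectral radius below 1.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

The same recursion, run for a finite number of steps with a terminal weight, gives the time-varying gains of the finite-horizon LQ problem.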
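Topic 8's principle of optimality can be illustrated by a backward Bellman recursion, V_k(x) = min_u [ l(x,u) + V_{k+1}(f(x,u)) ], on a discretised scalar problem. The grids, dynamics, and quadratic costs below are made up for illustration.

```python
import numpy as np

states = np.linspace(-2.0, 2.0, 41)   # discretised state grid
inputs = np.linspace(-1.0, 1.0, 21)   # discretised input grid
N = 20                                # horizon length
dt = 0.1

def step(x, u):
    """Toy scalar dynamics: x_{k+1} = x_k + dt * u_k."""
    return x + dt * u

def stage_cost(x, u):
    return dt * (x**2 + u**2)

# Terminal cost V_N(x) = x^2, then recurse backwards over the horizon.
V = states**2
for _ in range(N):
    V_new = np.empty_like(V)
    for i, x in enumerate(states):
        best = np.inf
        for u in inputs:
            x_next = np.clip(step(x, u), states[0], states[-1])
            # Linear interpolation of the value function at the successor.
            v_next = np.interp(x_next, states, V)
            best = min(best, stage_cost(x, u) + v_next)
        V_new[i] = best
    V = V_new
```

Since every cost term is nonnegative and x = 0 can be held at zero cost with u = 0, the computed value function is nonnegative with its minimum at the origin.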

Text Books:

  1. Pinch, Enid R. “Optimal Control and the Calculus of Variations”, Oxford University Press, 1995
  2. Diehl, M. and Gros, S., “Numerical Optimal Control”
  3. Donald E. Kirk, “Optimal Control Theory: An Introduction”, Prentice-Hall, 1998.

Reference Books:

  1. Mike Mesterton-Gibbons, “A Primer on the Calculus of Variations and Optimal Control Theory”, American Mathematical Society, First Indian Edition, 2012
  2. Daniel Liberzon, “Calculus of Variations and Optimal Control Theory – A concise introduction”, Princeton University Press, 2012