After following the course, a student is able to:
- Analyze the stability of differential equations using the methods of Lyapunov and LaSalle
- Explain whether a given problem can be cast as a calculus of variations problem or as an optimal control problem
- Solve Euler-Lagrange equations with and without final constraints and analyze second-order properties
- Formulate optimal control problems using Pontryagin’s minimum principle and solve the resulting Hamiltonian equations for simple enough problems
- Formulate optimal control problems as dynamic programming problems and solve the resulting Bellman equation for simple enough problems
- Analyze LQ optimal control problems over finite and infinite horizons, derive the resulting Riccati equations, determine whether or not the Riccati equation has a solution, and, for simple enough problems, solve the Riccati equation by hand (the central equations behind these objectives are collected after this list)
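
For reference, the central equations behind these objectives, written in standard textbook notation; the symbols (state $x$, control $u$, costate $p$, running cost $\ell$, dynamics $\dot{x} = f(x,u)$, value function $V$) are our notational choices and may differ from the course notes.

```latex
\begin{align*}
% Euler--Lagrange equation for J[x] = \int_{t_0}^{t_1} L(t, x, \dot{x})\,dt
&\frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = 0 \\[4pt]
% Pontryagin's minimum principle: Hamiltonian, costate equation, minimization
&H(x,p,u) = \ell(x,u) + p^{\top} f(x,u), \qquad
 \dot{p} = -\frac{\partial H}{\partial x}, \qquad
 u^{*} \in \arg\min_{u} H(x^{*},p,u) \\[4pt]
% Hamilton--Jacobi--Bellman equation for the value function V(t,x)
&-\frac{\partial V}{\partial t} = \min_{u}\Bigl[\ell(x,u) + \frac{\partial V}{\partial x}\, f(x,u)\Bigr] \\[4pt]
% Algebraic Riccati equation for the infinite-horizon LQ problem
&A^{\top}P + PA - PBR^{-1}B^{\top}P + Q = 0
\end{align*}
```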
The purpose of this course is to familiarize students with the principles of stability of dynamical systems, the calculus of variations, and optimal control. Stability is an important issue in many applications. Loosely speaking, the solution of a differential equation is called stable if a small change in the initial condition results in a small change in the solution. Two approaches are treated: the first is based on linearization, the second on Lyapunov functions.

The calculus of variations is concerned with problems in which the optimal value of a criterion depending on an infinite-dimensional quantity, such as a function, has to be found. A well-known example is finding the shape of a rope hanging from two points on a ceiling. Optimal control is concerned with problems such as steering a system from a given state to a desired state at minimal cost, where the cost can be fuel consumption, total work, money, or time.

We cover two approaches to optimal control. The first is based on Pontryagin's minimum principle, a generalization of the Lagrange multiplier method for optimizing a function subject to a constraint; the minimum principle is derived using the calculus of variations. The second is dynamic programming, a recursive algorithm that starts at the final time and solves the problem backwards in time until the initial state is reached. The optimal control problem for linear systems with a quadratic cost criterion, also known as the LQ problem, is treated in depth. Throughout the course, motivating examples based on physical and economic problems are given.
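
As a small illustration of the linearization approach to stability (our sketch, not course material; the pendulum model and damping value are assumptions), the following checks local asymptotic stability of an equilibrium by inspecting the eigenvalues of the Jacobian there:

```python
# Minimal sketch (not from the course): checking local stability of an
# equilibrium by linearization, for a damped pendulum
#   x1' = x2,  x2' = -sin(x1) - c*x2   (c = damping coefficient).
# The equilibrium (0, 0) is locally asymptotically stable if all
# eigenvalues of the Jacobian there have negative real part.
import numpy as np

c = 0.5  # hypothetical damping coefficient

# Jacobian of the right-hand side evaluated at the equilibrium (0, 0)
J = np.array([[0.0, 1.0],
              [-np.cos(0.0), -c]])

eigvals = np.linalg.eigvals(J)
print("eigenvalues:", eigvals)
print("locally asymptotically stable:", bool(np.all(eigvals.real < 0)))
```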
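
The backward recursion of dynamic programming can be sketched in a few lines. The scalar system, stage cost, and truncated state grid below are our illustrative assumptions, not taken from the course:

```python
# Minimal sketch (our illustration, not course code): finite-horizon
# dynamic programming for the scalar system x[k+1] = x[k] + u[k] with
# u in {-1, 0, 1}, stage cost x^2 + u^2, and terminal cost x^2.
# The Bellman recursion is solved backwards from the final time.
N = 5                          # horizon length
states = range(-5, 6)          # truncated state grid (assumption)
controls = (-1, 0, 1)

# V[x] holds the optimal cost-to-go; initialize with the terminal cost.
V = {x: x**2 for x in states}
policy = []                    # policy[k][x] = optimal u at time k, state x

for k in reversed(range(N)):
    V_new, pi = {}, {}
    for x in states:
        best_u, best_cost = None, float("inf")
        for u in controls:
            x_next = x + u
            if x_next not in V:      # skip moves that leave the grid
                continue
            cost = x**2 + u**2 + V[x_next]
            if cost < best_cost:
                best_u, best_cost = u, cost
        V_new[x], pi[x] = best_cost, best_u
    V, policy = V_new, [pi] + policy

print("optimal cost from x0 = 4:", V[4])
print("first optimal move from x0 = 4:", policy[0][4])
```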
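For the LQ problem, the finite-horizon solution is obtained by iterating the Riccati difference equation backwards in time. A minimal sketch, assuming hypothetical double-integrator dynamics and weight matrices of our choosing:

```python
# Minimal sketch (illustrative only): backward Riccati recursion for a
# finite-horizon discrete-time LQ problem
#   x[k+1] = A x[k] + B u[k],
#   cost = sum_k (x'Qx + u'Ru) + x[N]' Qf x[N].
# The gains K[k] give the optimal feedback u[k] = -K[k] x[k].
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])     # hypothetical double-integrator dynamics
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                  # state weight
R = np.array([[0.1]])          # control weight
Qf = np.eye(2)                 # terminal weight
N = 50                         # horizon

P = Qf
gains = []
for _ in range(N):             # iterate the Riccati difference equation backwards
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()                # gains[k] is the optimal gain at time k

print("first-step optimal gain K[0]:\n", gains[0])
```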
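Finally, a worked calculus-of-variations example in the spirit of the course (our choice of example, not necessarily one used in the lectures): the shortest curve between two points, obtained by solving the Euler-Lagrange equation.

```latex
% Shortest curve y(x) between (x_0, y_0) and (x_1, y_1):
% minimize J[y] = \int_{x_0}^{x_1} \sqrt{1 + y'^2}\, dx.
\begin{align*}
L(y, y') &= \sqrt{1 + y'^2}, \qquad
\frac{\partial L}{\partial y} = 0, \qquad
\frac{\partial L}{\partial y'} = \frac{y'}{\sqrt{1 + y'^2}} \\
\text{Euler--Lagrange:}\quad
0 &= \frac{d}{dx}\,\frac{y'}{\sqrt{1 + y'^2}}
\;\Longrightarrow\; \frac{y'}{\sqrt{1 + y'^2}} = \text{const}
\;\Longrightarrow\; y' = \text{const},
\end{align*}
% so the extremal is the straight line through the two endpoints.
```

The hanging-rope problem mentioned above is solved in the same way, with the rope's potential energy as the criterion and the fixed rope length as a constraint handled by a Lagrange multiplier.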