Continuous optimization is the branch of optimization where we optimize a (differentiable) function over continuous (as opposed to discrete) variables. Here the variables can be constrained by (differentiable) equality and inequality constraints as well as by convex cone constraints. Such optimization problems arise naturally and commonly in science and engineering, and also as relaxations of discrete optimization problems. Differentiability of the functions defining the problem allows the use of multivariable calculus and linear algebra techniques to study the problem and to design and analyze efficient algorithms.
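For concreteness, a generic problem of this kind can be written as (the notation here is for illustration only):

    \min \{\, f(x) : g_i(x) \le 0 \ (i = 1, \dots, m),\ h_j(x) = 0 \ (j = 1, \dots, p),\ x \in K \,\},

where f, the g_i and the h_j are differentiable functions on \mathbb{R}^n and K is a convex cone.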
This course aims to provide a concise introduction to the basics of unconstrained, constrained, and conic continuous optimization.
Learning goals
The student will be able to:
- Prove results on (convex) optimization problems.
- Solve the KKT conditions for basic constrained optimization problems (a small worked example follows this list).
- Formulate the Lagrange dual of a constrained problem, and understand and prove basic results on these problems.
- Give both sufficient and necessary optimality conditions for constrained continuous optimization problems.
- Use a range of techniques to solve both unconstrained and constrained continuous optimization problems, and prove results on these techniques.
- Formulate and recognize conic optimization problems, and construct their dual problems.
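As a small worked instance of the KKT goal above (the problem is chosen here purely for illustration): minimize x_1^2 + x_2^2 subject to x_1 + x_2 = 1. The Lagrangian is

    L(x, \nu) = x_1^2 + x_2^2 + \nu (x_1 + x_2 - 1),

and stationarity \nabla_x L = 0 gives 2x_1 + \nu = 0 and 2x_2 + \nu = 0, so x_1 = x_2 = -\nu/2. Primal feasibility x_1 + x_2 = 1 then forces \nu = -1, giving the optimizer x^\star = (1/2, 1/2) with optimal value 1/2. Since the problem is convex, the KKT conditions are sufficient as well as necessary here.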
In continuous optimization the variables take on continuous (as opposed to discrete) values, and the objective and constraints are typically differentiable. This allows the use of (multivariable) calculus techniques to study the problems and their solutions, and to design and analyze efficient algorithms for finding solutions. In this course we study the theory, algorithms, and applications of continuous optimization. In the theory part we discuss Lagrangian duality, optimality conditions, convexity, and conic programming. In the algorithmic part we discuss first-order optimization methods, neural networks/supervised learning, second-order optimization methods, and interior point methods, together with some of their convergence analysis. Throughout we discuss many relevant applications.

This course is part of the MasterMath program. Information about the course (description, organization, examination and prerequisites) can be found at http://www.mastermath.nl/.
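As a minimal sketch of the kind of first-order method treated in the algorithmic part (the quadratic objective, step size, and iteration count below are illustrative choices, not course material):

    import numpy as np

    def gradient_descent(grad, x0, step=0.1, iters=100):
        # Plain gradient descent with a fixed step size:
        # repeatedly step against the gradient direction.
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = x - step * grad(x)
        return x

    # Illustrative use: minimize f(x) = ||x - t||^2 with t = (1, 2);
    # the gradient is 2(x - t), and the minimizer is x = t.
    t = np.array([1.0, 2.0])
    print(gradient_descent(lambda x: 2 * (x - t), x0=[0.0, 0.0]))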
The UT contact person for this course is M.J. Uetz.
Lecturer (2022/2023): Daniel Dadush (CWI)
Prerequisites
The student should have a solid bachelor-level knowledge of linear algebra and multivariate analysis. The student should also have knowledge of linear optimization and convex analysis, to the level of being able to follow the text and do the exercises from the following:
Linear Programming, A Concise Introduction, Thomas S. Ferguson:
Available at https://www.math.ucla.edu/~tom/LP.pdf
Chapters 1 and 2, along with the accompanying exercises.
Convex Optimization, Stephen Boyd and Lieven Vandenberghe:
Available at http://stanford.edu/~boyd/cvxbook/
Sections: 2.1, 2.2 and 3.1.
Exercises (from the book): 2.1, 2.2, 2.12, 3.1, 3.3, 3.5 and 3.7