This course aims to provide a concise introduction to the basics of convex and nonconvex continuous constrained optimization.
In continuous optimization the variables take on continuous (as opposed to discrete) values, and the objective and constraints are typically differentiable. This allows the use of (multivariable) calculus to study the problems and their solutions, and to design and analyze efficient algorithms for finding solutions. In this course we study the theory, algorithms, and applications of continuous optimization. In the theory part we discuss Lagrangian duality, optimality conditions, convexity, and conic programming. In the algorithmic part we discuss first-order optimization methods, neural networks/supervised learning, second-order optimization methods, and interior point methods, including parts of their convergence analysis. Throughout, we discuss many relevant applications.

This course is part of the MasterMath program. Information about the course (description, organization, examination, and prerequisites) can be found at http://www.mastermath.nl/. The UT contact person for this course is M.J. Uetz.
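For concreteness, a generic problem in this class can be written in the following standard form. This is a minimal sketch in our own notation; the symbols f, g_i, and h_j are not taken from the course materials:

% Our notation (f, g_i, h_j, m, p); not taken from the course materials.
\begin{align*}
  \min_{x \in \mathbb{R}^n} \quad & f(x) \\
  \text{subject to} \quad & g_i(x) \le 0, \quad i = 1, \dots, m, \\
                          & h_j(x) = 0,   \quad j = 1, \dots, p,
\end{align*}

where f, g_i, h_j : \mathbb{R}^n \to \mathbb{R} are the (typically differentiable) objective and constraint functions. The problem is convex when f and each g_i are convex and each h_j is affine.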
Assumed knowledge
The student should have a solid knowledge of linear algebra and multivariable calculus. The student should also have knowledge of linear programming (including linear programming duality) and convex analysis to the level of being able to follow the text and do the exercises from:
- Chapters 1 and 2, including all exercises, of 'Linear Programming: A Concise Introduction' by Thomas S. Ferguson (https://www.math.ucla.edu/~tom/LP.pdf)
- Exercises 2.1, 2.2, 2.12, 3.1, 3.3, 3.5, and 3.7 from 'Convex Optimization' by Stephen Boyd and Lieven Vandenberghe (http://stanford.edu/~boyd/cvxbook)