Course module: 191581200
Continuous Optimization
Course info
Course module: 191581200
Credits (ECTS): 6
Course type: Course
Language of instruction: English
Contact person: prof.dr. M.J. Uetz
External lecturer
prof.dr. M.J. Uetz
Contact person for the course
prof.dr. M.J. Uetz
Academic year: 2022
Starting block
Remarks: Semester course; runs over quartiles 1A and 1B.
This course is part of the MasterMath programme.
Application procedure: You apply via OSIRIS Student
Registration using OSIRIS: Yes
Continuous optimization is the branch of optimization where we optimize a (differentiable) function over continuous (as opposed to discrete) variables. Here the variables can be constrained by (differentiable) equality and inequality constraints as well as by convex cone constraints. Optimization problems like this occur naturally and commonly in science and engineering and also occur as relaxations of discrete optimization problems. Differentiability of the functions defining the problems allows for the use of multivariable calculus and linear algebra techniques to study the problems and to design and analyze efficient algorithms.
This course aims to provide a concise introduction to the basics of unconstrained, constrained, and conic continuous optimization.

Learning goals
The student will be able to:
  • Prove results on (convex) optimization problems.
  • Solve the KKT conditions for basic constrained optimization problems.
  • Formulate the Lagrange dual, and understand and prove basic results on these problems.
  • Give both sufficient and necessary optimality conditions for constrained continuous optimization problems.
  • Use a range of techniques to solve both unconstrained and constrained continuous optimization problems, and prove results on these techniques.
  • Formulate and recognize conic optimization problems, along with being able to construct their dual problems.
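As a small illustration of the second learning goal (this worked example is not part of the official course description), consider minimizing x² + y² subject to x + y = 1. The KKT conditions, which here reduce to the classical Lagrange conditions, can be solved by hand:

```latex
\min_{x,y}\; x^2 + y^2 \quad \text{s.t.}\quad x + y = 1.
% Lagrangian:
%   L(x, y, \lambda) = x^2 + y^2 + \lambda (x + y - 1)
% Stationarity:
%   2x + \lambda = 0, \qquad 2y + \lambda = 0 \;\Rightarrow\; x = y = -\lambda/2
% Primal feasibility:
%   x + y = 1 \;\Rightarrow\; \lambda = -1, \quad x = y = 1/2
```

Since the objective is convex and the constraint is affine, the KKT conditions are sufficient, so (x, y) = (1/2, 1/2) is the global minimizer.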
In continuous optimization the variables take on continuous (as opposed to discrete) values, and the objective and constraints are typically differentiable. This allows for the use of (multivariable) calculus techniques to study the problems and their solutions, and to design and analyze efficient algorithms for finding solutions. In this course we study the theory, algorithms, and applications of continuous optimization. In the theory part we discuss Lagrangian duality, optimality conditions, convexity, and conic programming. In the algorithmic part we discuss first-order optimization methods, neural networks/supervised learning, second-order optimization methods, and interior point methods, including some of their convergence analysis. Throughout, we discuss many relevant applications.

This course is part of the MasterMath programme. Information about the course (description, organization, examination and prerequisites) can be found on the MasterMath website.
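As a minimal sketch of the first-order methods mentioned above (this example is not course material; the problem instance and step size are illustrative choices), fixed-step gradient descent on the convex quadratic f(x) = ½ xᵀAx − bᵀx converges to the minimizer, which solves Ax = b:

```python
# Illustrative sketch: gradient descent, the prototypical first-order method,
# on f(x) = 1/2 x^T A x - b^T x with A symmetric positive definite.
import numpy as np

def gradient_descent(grad, x0, step, iters):
    """Fixed-step gradient descent: x_{k+1} = x_k - step * grad(x_k)."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b              # gradient of the quadratic f

x_star = gradient_descent(grad, np.zeros(2), step=0.2, iters=200)
# x_star approximates the solution of A x = b, here (0.2, 0.4)
```

The step size 0.2 lies below 2 divided by the largest eigenvalue of A, which is the standard condition guaranteeing convergence for quadratics of this form.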

The UT contact person for this course is M.J. Uetz.

Lecturer (2022/2023): Daniel Dadush (CWI)

The student should have a solid bachelor-level knowledge of linear algebra and multivariate analysis. The student should also have knowledge of linear optimization and convex analysis, to the level of being able to follow the text and do the exercises from the following sources:

Linear Programming, A Concise Introduction, Thomas S. Ferguson:
Available at
Chapters 1 and 2, along with the accompanying exercises.
Convex Optimization, Stephen Boyd and Lieven Vandenberghe:
Available at
Sections: 2.1, 2.2 and 3.1.
Exercises (from the book): 2.1, 2.2, 2.12, 3.1, 3.3, 3.5 and 3.7
Participating study
Master Applied Mathematics
Required materials
Recommended materials
Course material
The materials will be available online or provided by the lecturer.
Instructional modes

Written exam
