Upon completion of this course the student:
- can decide algorithmically if a system of linear equalities is feasible, for both integer and real variables, and, if it is infeasible, provide a certificate of this;
- can decide algorithmically if a system of linear inequalities is feasible for real variables, and, if it is infeasible, provide a certificate of this;
- can decide algorithmically if a symmetric matrix is positive definite or positive semi-definite (a sketch of such a check follows this list);
- can apply the concept of orthogonal projection in different settings and understand its implications;
- can use the concepts of weak and strong LP duality and (approximate) complementary slackness to prove (approximate) optimality of solutions of linear programs;
- knows and can use properties of convex sets and convex functions in proofs;
- knows and can check optimality conditions for unconstrained optimization;
- knows the theoretical basis of and can apply different methods for solving unconstrained optimization problems.
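As an illustration of the matrix-definiteness outcome above, the following is a minimal sketch, not part of the course material, that classifies a symmetric matrix by the signs of its eigenvalues using NumPy; the tolerance is an illustrative assumption, and other equivalent tests (for example, attempting a Cholesky factorization) exist.

```python
import numpy as np

def definiteness(A, tol=1e-10):
    """Classify a symmetric matrix as positive definite, positive
    semi-definite, or neither, via the signs of its eigenvalues."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T):
        raise ValueError("matrix must be symmetric")
    eigvals = np.linalg.eigvalsh(A)  # real eigenvalues of a symmetric matrix
    if np.all(eigvals > tol):
        return "positive definite"
    if np.all(eigvals >= -tol):
        return "positive semi-definite"
    return "neither"

print(definiteness([[2.0, 1.0], [1.0, 2.0]]))  # positive definite (eigenvalues 1 and 3)
print(definiteness([[1.0, 1.0], [1.0, 1.0]]))  # positive semi-definite (eigenvalues 0 and 2)
```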
Optimization problems appear in many real-life situations, such as production processes, industrial design, logistics and machine learning. This course aims to expand on algorithmic principles and fundamental ideas of linear optimization and give an introduction to techniques for nonlinear optimization. The course enables the student to follow more advanced courses in continuous and discrete optimization.
The course treats optimization of a function mainly in two different settings: 1) when the function is linear and subject to linear (in)equality constraints, and 2) when the function is nonlinear and unconstrained. The first setting expands on the assumed prior knowledge with more algorithmic methods and with proofs of theorems such as Farkas' lemma. We show that these results directly imply strong duality of linear programs, and we extend the concept of complementary slackness to approximate complementary slackness. We then study the properties of convex sets and convex functions, which play an important role in optimization and applied analysis. Finally, we treat nonlinear, unconstrained optimization, using concepts such as descent methods, line search and Newton methods.
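To make the last point concrete, here is a minimal sketch, not taken from the course material, of a descent method with a backtracking (Armijo) line search; the test function, tolerances and step-size parameters are illustrative assumptions only.

```python
import numpy as np

def backtracking(f, grad, x, d, alpha=1.0, beta=0.5, c=1e-4):
    """Shrink the step size until the Armijo sufficient-decrease condition holds."""
    fx, slope = f(x), grad(x) @ d  # slope < 0 for a descent direction
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= beta
    return alpha

def gradient_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    """Steepest descent with backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # first-order optimality: gradient (almost) zero
            break
        d = -g                       # steepest-descent direction
        x = x + backtracking(f, grad, x, d) * d
    return x

# Illustrative test problem: minimize f(x, y) = (x - 1)^2 + 4 * (y + 2)^2.
f = lambda v: (v[0] - 1.0) ** 2 + 4.0 * (v[1] + 2.0) ** 2
grad = lambda v: np.array([2.0 * (v[0] - 1.0), 8.0 * (v[1] + 2.0)])
print(gradient_descent(f, grad, [0.0, 0.0]))  # approaches the minimizer (1, -2)
```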