Course module: 201800177
Deep Learning - From Theory to Practice
Course info
Course module: 201800177
Credits (ECTS): 5
Course type: Course
Language of instruction: English
Contact person: prof.dr. C. Brune, dr. N. Botteghi
Contact person for the course: prof.dr. C. Brune, dr. L. Spek, J.M. Suk
Academic year: 2022
Starting block
Application procedure: You apply via OSIRIS Student
Registration using OSIRIS: Yes
Learning goals
After completing this course, students are expected to:
  1. Be able to formulate a deep learning problem mathematically by choosing an adequate deep network architecture and operators (Modeling).
  2. Be able to derive the backpropagation principle for deep learning networks and verify its relation to stochastic gradient descent (Optimization).
  3. Be able to use and extend a state-of-the-art software package for deep learning in Python and MATLAB to solve a real-world example (Programming).
  4. Be able to derive and apply a variational auto-encoder and a generative adversarial network for a specific problem of data interpolation (Data Compression).
  5. Be able to validate a given deep learning model based on robustness and generalization properties (Understanding).
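Learning goal 2 concerns the relation between backpropagation and stochastic gradient descent. As an illustrative sketch only (not course material), the following NumPy snippet trains a one-hidden-layer network on the XOR problem: the backward pass applies the chain rule to obtain gradients, and SGD updates the weights one randomly chosen sample at a time. All names and hyperparameters here are arbitrary toy choices.

```python
import numpy as np

# Toy illustration: one hidden layer, sigmoid activations, squared loss.
# Backpropagation computes the gradient; SGD updates on one sample at a time.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR data: not linearly separable, so the hidden layer is essential.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, 4)        # hidden -> output weights
b2 = 0.0
lr = 0.5                        # toy learning rate

for step in range(5000):
    i = rng.integers(len(X))            # "stochastic": pick one sample
    x, t = X[i], y[i]
    # Forward pass
    h = sigmoid(x @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule, i.e. backpropagation
    d_out = (out - t) * out * (1 - out)
    d_h = d_out * W2 * h * (1 - h)
    # SGD update
    W2 -= lr * d_out * h
    b2 -= lr * d_out
    W1 -= lr * np.outer(x, d_h)
    b1 -= lr * d_h

# Predictions for all four inputs after training
preds = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(preds)
```

In the course, the same principle is derived in full generality for deep architectures; this sketch only shows the mechanics for a single hidden layer.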
In recent years, deep artificial neural networks have won numerous contests in big data science and machine learning, e.g. in computer vision, pattern recognition and medical imaging, often reaching human-level performance. Although many excellent practical toolboxes exist, e.g. for convolutional neural networks, deep learning is still widely treated as a black box. This course provides students with the mathematical background needed to understand, analyze and apply the models and algorithms at the heart of deep learning, and equips them with new insights into deep learning theory that have a direct impact on deep learning applications.
The main topics include:
  • Introduction to deep learning and convolutional neural networks
  • Backpropagation and basic stochastic optimization
  • Basic auto-encoders and data compression
  • Deep architectures with dropout and sparsity
  • Advanced optimization methods
  • Sequential learning and recurrent neural networks
  • Variational auto-encoders and generative adversarial networks
  • Deep reinforcement learning
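To give a flavor of the auto-encoder and data-compression topic, here is a minimal sketch (an illustrative toy, not course code) of a linear auto-encoder in NumPy: data in R^5 that lies near a 2-D subspace is compressed to a 2-D code and reconstructed, with both weight matrices trained by plain gradient descent on the reconstruction error. All sizes and step counts are arbitrary assumptions.

```python
import numpy as np

# Linear auto-encoder: encode z = x @ W_enc (compress 5-D -> 2-D),
# decode x_hat = z @ W_dec (reconstruct 2-D -> 5-D).
rng = np.random.default_rng(1)

# Synthetic data that truly lies near a 2-D subspace of R^5.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 5))

W_enc = rng.normal(0, 0.3, (5, 2))
W_dec = rng.normal(0, 0.3, (2, 5))
lr = 0.01

for step in range(5000):
    Z = X @ W_enc                 # encode (compress)
    X_hat = Z @ W_dec             # decode (reconstruct)
    err = X_hat - X               # reconstruction error
    # Gradients of the mean squared reconstruction loss
    # (up to a constant factor absorbed into the learning rate)
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

var0 = np.mean(X ** 2)            # error of the "predict zero" baseline
mse = np.mean((X @ W_enc @ W_dec - X) ** 2)
```

With linear maps and squared loss this recovers a PCA-like compression; the course extends the idea to nonlinear, variational and generative variants.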
Assessment
  1. Written exam (60%)
  2. Hand-in assignments (20%)
  3. Final project (20%)
Mandatory prior knowledge:
Students need a solid background in multivariate calculus and linear algebra, as well as basic programming experience in Python. Basic knowledge of numerical methods for differential equations and of optimization methods is mandatory, for instance from:
B-AM M6: Dynamical Systems
B-AM, B-TCS M7: Discrete Structures and Efficient Algorithms.

Recommended prior knowledge:
Basic knowledge of machine learning, e.g. from the courses Machine Learning I (201600070) or Statistical Learning (201900115), is recommended and eases comprehension of the introductory concepts of deep learning practice. Students with a more theoretical focus will find skills from the MSc mathematics courses on scientific computing, information theory or complex networks helpful.
Participating study
Master Applied Mathematics
Master Computer Science
Master Electrical Engineering
Master Interaction Technology
Master Robotics
Required materials
Recommended materials
“Neural Networks and Deep Learning”, by Charu C. Aggarwal, Springer, 2018 (available digitally)
“Deep Learning”, by Ian Goodfellow, Yoshua Bengio and Aaron Courville, MIT Press, 2016 (available digitally)
Instructional modes
Exam, Assignments, Project