Prof. Holger Rauhut

Teaching

Overview of the courses we provide at LMU Munich


Mathematical Foundations of Deep Learning moodle LSF

Deep Learning is the basic methodology at the core of most modern technology in machine learning and artificial intelligence. The course Mathematical Foundations of Machine Learning II - Deep Learning aims at covering the mathematical theory of deep learning. Deep neural networks are functions constructed as compositions of simple functions called layers, each of which is an affine function composed with a componentwise nonlinear function, the so-called activation function. In the learning process, the parameters of these functions are adapted to training data with the aim of making predictions on new data based on the learned neural network. While the mathematical theory of deep learning is currently rather in its early stages and developing quickly, a number of mathematical principles and key results are understood by now; these will be the subject of this course.
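As a minimal illustration of this compositional structure (our own sketch, not official course material; the layer widths and names are arbitrary), a small feedforward network in NumPy:

    import numpy as np

    def relu(z):
        # Componentwise nonlinear activation function.
        return np.maximum(z, 0.0)

    rng = np.random.default_rng(0)
    # Layer widths 3 -> 5 -> 5 -> 1: each layer is an affine map
    # x -> W x + b followed by the componentwise activation.
    widths = [3, 5, 5, 1]
    params = [(rng.standard_normal((m, n)), rng.standard_normal(m))
              for n, m in zip(widths[:-1], widths[1:])]

    def network(x, params):
        # Composition of layers; the final layer is kept purely affine.
        for W, b in params[:-1]:
            x = relu(W @ x + b)
        W, b = params[-1]
        return W @ x + b

    print(network(np.ones(3), params))

The entries of the weight matrices W and bias vectors b are exactly the parameters that the learning process adapts to training data.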
In particular, we plan to cover the following mathematical topics:

  • Approximation: How well can we approximate a given function with a neural network?
  • Optimization: The learning process consists in solving a high-dimensional and non-convex optimization problem over the parameters of a neural network.
  • Generalization: Based on probabilistic assumptions on the ground truth and the training data, it is possible to estimate how well a learned neural network performs on future data.

Analysis 1

The course provides the rigorous mathematical foundations of calculus, establishing the theoretical framework for concepts such as limits, continuity, differentiation, and integration. It introduces students to the epsilon-delta formalism and trains them in constructing precise mathematical proofs. Throughout the lecture, students will develop both computational skills and a deep conceptual understanding of real analysis, learning to work with sequences, series, and functions on the real line. The focus will be on single-variable analysis, building the essential groundwork for advanced mathematical studies.
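For illustration (standard material, not specific to this course), the epsilon-delta definition of continuity of a function f: D -> R at a point x_0 in D reads:

    \[
    \forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x \in D : \quad
    |x - x_0| < \delta \;\Longrightarrow\; |f(x) - f(x_0)| < \varepsilon .
    \]

Proofs in the course consist in exhibiting, for each given epsilon, a concrete delta for which this implication holds.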

Mathematical Foundations of Deep Learning moodle LSF

See Current Semester for a description of the course.

Mathematical Foundations of Machine Learning moodle LSF

The goal of the course is to gain a basic understanding of the theoretical foundations of machine learning. In particular, the course emphasizes the mathematical formulation of machine learning concepts and algorithms as well as their rigorous mathematical analysis.
Content of the course:
Part I: Statistical Learning Theory

  • Supervised Learning, classification and regression problems
  • The framework of Probably Approximately Correct (PAC) Learning
  • Tools from Probability Theory (Markov's and Hoeffding's inequalities, Cramér's theorem); Hoeffding's inequality is stated after this list
  • Empirical Risk Minimization
  • No Free Lunch Theorem of supervised learning
  • Bias-Complexity Trade-off
  • Generalization Error Bounds via Rademacher complexity and VC dimension
Part II: Learning Algorithms and their Analysis
  • Support Vector Machines
  • Stochastic Gradient Descent
  • Neural Networks (Deep Learning)
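As a glimpse of the probabilistic tools from Part I, here is the standard statement of Hoeffding's inequality (a classical result, reproduced for illustration): for independent random variables X_1, ..., X_n with a_i <= X_i <= b_i almost surely,

    \[
    \mathbb{P}\left( \left| \frac{1}{n} \sum_{i=1}^{n} \left( X_i - \mathbb{E}[X_i] \right) \right| \geq t \right)
    \leq 2 \exp\left( - \frac{2 n^2 t^2}{\sum_{i=1}^{n} (b_i - a_i)^2} \right)
    \quad \text{for all } t > 0 .
    \]

In learning theory, this bounds the deviation of the empirical risk from the true risk for a fixed hypothesis, which is the starting point for the generalization error bounds listed above.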

High Dimensional Probability Theory moodle LSF

High dimensional probability theory studies the behavior of random phenomena in spaces where the number of dimensions grows large, often revealing counter-intuitive effects that do not appear in low dimensions. The course covers fundamental concentration inequalities, random matrix theory, and geometric properties of high-dimensional distributions. Throughout the lecture, students will learn how classical probabilistic intuition can fail in high dimensions and develop tools to analyze complex stochastic systems arising in modern applications. The focus will be on asymptotic behavior and dimensional scaling, with applications to statistics, data science, and theoretical computer science.
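A quick numerical illustration of such concentration effects (our own sketch, not part of the course material): the Euclidean norm of a standard Gaussian vector in R^d concentrates sharply around sqrt(d), so in high dimensions almost all of the mass sits near a sphere.

    import numpy as np

    rng = np.random.default_rng(0)
    for d in (2, 100, 10_000):
        # Draw many standard Gaussian vectors in R^d and compute their norms.
        norms = np.linalg.norm(rng.standard_normal((5_000, d)), axis=1)
        # The mean norm is close to sqrt(d), while the standard deviation
        # stays O(1), so the relative spread vanishes as d grows.
        print(d, norms.mean() / np.sqrt(d), norms.std())

Running this shows the ratio mean/sqrt(d) approaching 1 while the absolute spread stays roughly constant, one of the counter-intuitive effects mentioned above.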

Compressive Sensing moodle LSF

Compressive sensing is a paradigm that exploits sparsity to recover high-dimensional signals from a remarkably small number of measurements, challenging the traditional Nyquist sampling theorem. The course covers the mathematical foundations of sparse signal recovery, including restricted isometry properties, coherence conditions, and reconstruction algorithms such as basis pursuit and greedy methods. Throughout the lecture, students will learn when exact or approximate recovery is guaranteed and how to design efficient measurement matrices for various applications. The focus will be on the interplay between linear algebra, optimization, and probability theory, with applications ranging from medical imaging to signal processing.
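To make the recovery problem concrete, here is a minimal sketch (our own, assuming a Gaussian measurement matrix; the dimensions are arbitrary) of basis pursuit, i.e. minimizing the l1-norm of x subject to Ax = y, recast as a linear program:

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    m, n, s = 40, 100, 5  # measurements, ambient dimension, sparsity

    A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian measurement matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
    y = A @ x_true  # m << n linear measurements

    # Basis pursuit as an LP in the variables (x, u):
    # minimize sum(u) subject to -u <= x <= u and A x = y.
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.block([[np.eye(n), -np.eye(n)], [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=(None, None))
    x_hat = res.x[:n]
    print("recovery error:", np.linalg.norm(x_hat - x_true))

With these dimensions the 5-sparse vector is typically recovered exactly from only 40 measurements in a 100-dimensional space, which is the phenomenon the course analyzes rigorously.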

Optimization Methods moodle LSF

Optimization is the discipline of finding the "best" alternative among a set of possible options in terms of a given objective function. The course is devoted to the study of the most widely used optimization methods and their convergence analysis. Throughout the lecture, students will learn how to select the optimization method best suited to a given problem and to evaluate the expected rate of convergence of the algorithm in that specific scenario. The focus will be on continuous optimization, meaning that we will consider problems whose variables live in a real vector space.
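As a minimal example of the kind of method and convergence analysis treated in the course (our own sketch, not course material), gradient descent with the standard step size 1/L on a smooth convex quadratic:

    import numpy as np

    rng = np.random.default_rng(0)
    d = 20
    # Convex quadratic f(x) = 0.5 x^T Q x - b^T x with Q positive definite.
    M = rng.standard_normal((d, d))
    Q = M @ M.T + np.eye(d)
    b = rng.standard_normal(d)
    x_star = np.linalg.solve(Q, b)  # the unique minimizer

    L = np.linalg.eigvalsh(Q).max()  # smoothness constant of f
    x = np.zeros(d)
    for _ in range(500):
        grad = Q @ x - b   # gradient of f at the current iterate
        x -= grad / L      # fixed step size 1/L
    print("distance to minimizer:", np.linalg.norm(x - x_star))

For L-smooth convex functions, this step-size choice guarantees convergence of the function values at rate O(1/k); for strongly convex quadratics as here, convergence is even linear.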

Mathematical Foundations of Machine Learning

See Previous Semester for a description of the course and the links to the moodle and LSF pages.

Optimization Methods

See Previous Semester for a description of the course and the links to the moodle and LSF pages.