MIP Seminar: Max Schölpple (University of Stuttgart)

Abstract: We introduce a new framework for the theoretical analysis of learning algorithms, based on the notion of self-regularization, meaning that the algorithm itself produces sufficiently regular functions. As a central example, we analyze gradient descent and show that this framework yields minimax-optimal learning rates in broad settings with comparatively little technical effort.
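As background for the abstract, here is a minimal sketch of plain fixed-step gradient descent on an empirical least-squares objective. It is an illustrative toy only, not the self-regularization analysis from the talk; all names and parameters below are hypothetical.

```python
# Illustrative toy (hypothetical names/parameters): fixed-step gradient
# descent on empirical least squares, min_w (1/2n) * ||X w - y||^2.
import numpy as np

def gradient_descent(X, y, step=0.1, n_iters=200):
    """Run fixed-step gradient descent and return the final iterate."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / n  # gradient of the empirical risk
        w -= step * grad
    return w

# Usage on synthetic noisy linear data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)
w_hat = gradient_descent(X, y)
```

In analyses of this kind, the number of iterations and the step size act as the tuning knobs that control how regular the returned function is.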
Date: Tuesday, 21 April 2026
Time: 4:15 pm - 6:00 pm