MIP Seminar: Mariia Seleznova (LMU Munich)

Date:

Tuesday, 14 April 2026, 4:15 pm - 6:00 pm

Location:

Room B349, Theresienstr. 39, 80333 München
Zoom room: https://lmu-munich.zoom-x.de/j/65568681308?pwd=XRPpwu055SZdJJOjaGjQFzNGCdF5Xa.1

Abstract: Depth plays a central role in modern deep learning, yet its probabilistic effects are subtle and not fully captured by classical theories, which focus primarily on the infinite-width limit. This talk explores how jointly scaling depth and width shapes the signal-propagation statistics of wide neural networks under two contrasting regimes: fully connected feedforward networks with independent weights across layers, and recurrent networks with shared weights. In feedforward networks, standard infinite-width analyses make it possible to stabilize the forward and backward variance, ensuring well-behaved initialization. However, finite-width fluctuations accumulate with depth, breaking convergence to the Neural Tangent Kernel (NTK) regime. In contrast, in linear recurrent networks, finite-width effects already destabilize the forward-propagation variance, rendering conventional initialization schemes inadequate for long input sequences. Together, these results show that depth affects feedforward and recurrent architectures in qualitatively distinct ways that infinite-width approximations cannot capture.
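
The contrast in the abstract can be illustrated numerically. Below is a minimal sketch (not taken from the talk; width, depth, and trial counts are illustrative choices) that estimates how the spread of the log squared-norm of the forward signal grows with depth, for a He-initialized ReLU feedforward network with independent layer weights and for a linear recurrent network that reuses one shared Gaussian weight matrix:

# Minimal sketch (illustrative, not from the talk): Monte-Carlo estimate of
# finite-width fluctuations in forward signal propagation, for
# (a) a fully connected ReLU net with independent He-initialized layers, and
# (b) a linear recurrent net applying one shared weight matrix repeatedly.
import numpy as np

rng = np.random.default_rng(0)

def ffn_log_norm_std(width, depth, trials=100):
    """Std across trials of log ||h||^2 after `depth` ReLU layers,
    with independent weights W ~ N(0, 2/width) per layer (He init)."""
    logs = []
    for _ in range(trials):
        h = rng.standard_normal(width)
        for _ in range(depth):
            W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
            h = np.maximum(W @ h, 0.0)  # ReLU
        logs.append(np.log(np.sum(h**2) + 1e-300))
    return np.std(logs)

def rnn_log_norm_std(width, steps, trials=100):
    """Std across trials of log ||h||^2 after `steps` applications of a
    single shared matrix W ~ N(0, 1/width) (linear recurrent net)."""
    logs = []
    for _ in range(trials):
        W = rng.standard_normal((width, width)) / np.sqrt(width)  # shared over time
        h = rng.standard_normal(width)
        for _ in range(steps):
            h = W @ h
        logs.append(np.log(np.sum(h**2) + 1e-300))
    return np.std(logs)

width = 128
for depth in (8, 32, 128):
    print(f"depth/steps={depth:4d}  "
          f"FFN std(log||h||^2)={ffn_log_norm_std(width, depth):.2f}  "
          f"RNN std(log||h||^2)={rnn_log_norm_std(width, depth):.2f}")

In this toy setting the feedforward spread grows slowly, roughly with the depth-to-width ratio, whereas the shared-weight recurrent spread grows much faster with the number of steps, consistent with the qualitative picture the abstract describes.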