This workshop, to be held from 12 to 16 May 2025, is part of the Bernoulli Center programme at EPFL.
Neural networks, and deep learning in particular, have had unprecedented success across many domains in science, technology, and computing, leading to immense societal, commercial, and scientific impact. However, unlike in classical domains of computer science, our theoretical understanding of their optimization dynamics, internal representations, generalization properties, and robustness remains limited. The training of neural networks is essentially a noisy collective process in high-dimensional spaces, where learning emerges from the interaction of many degrees of freedom (the model weights) and the input data. It is therefore not surprising that the intersection of theoretical physics and deep learning has become fertile ground for advancing our understanding of these systems. Tools from statistical mechanics, field theory, nonlinear dynamics, and the theory of disordered and complex systems have proven invaluable for addressing core challenges in machine learning, such as explaining phenomenological scaling laws, uncovering limiting behaviors, interpreting latent representations of data, and analyzing the effects of stochasticity and correlations in learning.
The Lausanne Event on Machine Learning and Neural Network Theory (LEMAN-TH 2025) will bring together physicists, as well as mathematicians and computer scientists, whose work addresses these questions. The event will focus in particular on diffusion and generative models, large language models (LLMs), attention mechanisms, and representation learning.