Fundamental principles of learning and representation: from brains to LLMs

This workshop is part of the Bernoulli Center programme at EPFL, to be held from 11 to 13 May 2026.

Today, the progress of AI capabilities far outpaces Moore's law. More concerningly, it also exceeds the pace at which we understand how these systems learn and reason, and how reliable they are. AI is rapidly permeating many facets of society, including scientific research. Fundamental research in AI is therefore essential—not only to ensure its safe and sustainable development, but also to integrate it effectively as a scientific tool.

One of the most remarkable recent achievements in AI is the advent of Large Language Models (LLMs), such as ChatGPT. These generative language models are based on deep transformer architectures pre-trained on massive text corpora. They learn to produce grammatically coherent text solely from examples—a capability that many linguists once deemed impossible. Identifying the textual correlations these models exploit, understanding how hierarchical language representations emerge, and determining how these representations encode grammar and semantics are among the central questions of our time. This workshop will explore these questions, guided by foundational theories and by the development of synthetic, structured data models, while also drawing inspiration from the study of language as a human neuro-cognitive system.

 

Start date & time

11/05/2026

End date & time

13/05/2026

Organisers

Francesco Cagnetta, SISSA
Andrey Gromov, META
Clement Hongler, EPFL
Matthieu Wyart, EPFL