Our research

Here are the main topics on which our group works.

Theoretical foundations of LLMs

We focus on uncovering the mathematical structures that underlie the advanced learning capabilities of large language models. Our goal is to provide theoretical guarantees for methods used by practitioners and to enhance the capabilities and effectiveness of LLMs.

Theory of supervised deep learning

Understanding the performance of neural networks is one of the most exciting challenges facing the machine learning community today.

Adversarial machine learning

Adversarial machine learning is concerned with building models that are robust to malicious adversaries.
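
To give a flavour of the problem, here is a minimal sketch of the classic fast gradient sign method (FGSM) applied to a simple logistic-regression model. The function name, model, and data are purely illustrative assumptions, not code from our own projects.

```python
import numpy as np

def fgsm_linear(x, y, w, b, eps=0.1):
    """Fast gradient sign attack on a logistic-regression model: perturb x in the
    direction that most increases the loss, within an L-infinity ball of radius eps.
    (Illustrative sketch; the names and setup are assumptions.)"""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted probability of class 1
    grad_x = (p - y) * w                      # gradient of the cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy usage: a correctly classified point becomes misclassified after the attack.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.3, -0.2]), 1
x_adv = fgsm_linear(x, y, w, b, eps=0.5)
print("clean score:", w @ x + b, "adversarial score:", w @ x_adv + b)
```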

Meta learning

Machine learning techniques, such as multi-task learning and meta learning, have been successful in enabling effective learning from limited data.

Stochastic optimisation

The tremendous success of machine learning in recent years is largely due to the impressive performance of stochastic optimisation algorithms such as stochastic gradient descent (SGD).
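
As a concrete illustration, here is a minimal sketch of plain SGD applied to a least-squares problem, processing one sample per step. The function name, data, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def sgd_least_squares(X, y, lr=0.01, epochs=10, seed=0):
    """Minimise (1/2) * sum_i (x_i . w - y_i)^2 with plain SGD, one sample per step.
    (Illustrative sketch; the names and setup are assumptions.)"""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Stochastic gradient: gradient of the loss at a single sample.
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

# Toy usage: recover a planted weight vector from noisy linear observations.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=200)
w_hat = sgd_least_squares(X, y, lr=0.05, epochs=20)
print(np.round(w_hat - w_true, 3))
```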

MCMC algorithms

Markov chain Monte Carlo (MCMC) algorithms are a powerful computational tool for Bayesian inference.
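
As a simple illustration, here is a minimal sketch of a random-walk Metropolis sampler, one of the most basic MCMC algorithms. The target density, step size, and function name are illustrative assumptions.

```python
import numpy as np

def random_walk_metropolis(log_density, x0, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis: propose a Gaussian step and accept it with the
    Metropolis ratio, so the chain targets the given (unnormalised) density.
    (Illustrative sketch; the names and setup are assumptions.)"""
    rng = np.random.default_rng(seed)
    x = float(x0)
    samples = np.empty(n_samples)
    log_p = log_density(x)
    for t in range(n_samples):
        proposal = x + step * rng.normal()
        log_p_prop = log_density(proposal)
        # Accept with probability min(1, p(proposal) / p(x)).
        if np.log(rng.uniform()) < log_p_prop - log_p:
            x, log_p = proposal, log_p_prop
        samples[t] = x
    return samples

# Toy usage: sample from a standard normal target given its unnormalised log-density.
samples = random_walk_metropolis(lambda x: -0.5 * x**2, x0=0.0)
print(samples.mean(), samples.std())
```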