DOLA – Chair of Dynamics of Learning Algorithms
At DOLA, our goal is to understand the mechanisms behind the key algorithms used in machine learning and signal processing. What do they learn? How do they learn, and how fast? When do they fail or succeed? How can they be improved? To fulfill this objective, we study the optimization, statistical, and functional-approximation aspects — and their interplay — often in asymptotic regimes that facilitate mathematical analysis. Our current interest is centered on gradient methods for two-layer neural networks, sparse deconvolution, and divergences between probability measures (including the Wasserstein distance).
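To make the first of these topics concrete, here is a minimal, hypothetical sketch (not the chair's actual code) of full-batch gradient descent on a two-layer ReLU network for least-squares regression; all names, dimensions, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: gradient descent on a two-layer
# (one hidden layer) ReLU network, f(x) = a^T max(Wx, 0).
rng = np.random.default_rng(0)

n, d, m = 64, 5, 100                      # samples, input dim, hidden width
X = rng.normal(size=(n, d))
y = np.sin(X @ rng.normal(size=d))        # synthetic regression target

W = rng.normal(size=(m, d)) / np.sqrt(d)  # hidden-layer weights
a = rng.normal(size=m) / np.sqrt(m)       # output-layer weights
lr = 0.1                                  # step size

def forward(X, W, a):
    H = np.maximum(X @ W.T, 0.0)          # hidden activations (ReLU)
    return H, H @ a                       # predictions

losses = []
for step in range(500):
    H, pred = forward(X, W, a)
    r = pred - y                          # residuals
    losses.append(0.5 * np.mean(r**2))    # mean-squared loss
    # gradients of the loss w.r.t. output and hidden weights
    grad_a = H.T @ r / n
    grad_W = ((r[:, None] * (H > 0)) * a).T @ X / n
    a -= lr * grad_a
    W -= lr * grad_W
```

A sketch like this is the kind of object whose training dynamics (e.g. in the infinite-width or mean-field limit) become mathematically tractable.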