Dr. Etienne Boursier

Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs

September 8, 2022 | 10:00am CET

The training of neural networks by gradient descent methods is a cornerstone of the deep learning revolution. Yet, despite some recent progress, a complete theory explaining its success is still missing. This talk presents, for orthogonal input vectors, a precise description of the gradient flow dynamics of training one-hidden-layer ReLU neural networks for the mean squared error at small initialisation. In this setting, despite non-convexity, we show that the gradient flow converges to zero loss and characterise its implicit bias towards minimum variation norm solutions. Furthermore, two interesting phenomena are highlighted: a quantitative description of the initial alignment phase and a proof that the process follows specific saddle-to-saddle dynamics.
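As a rough illustration of this setting (not the speaker's actual experiments), the NumPy sketch below trains a one-hidden-layer ReLU network by small-step gradient descent, a discrete-time stand-in for gradient flow, on orthogonal inputs (the standard basis of R^d) with tiny initialisation. All dimensions, scales and step sizes are arbitrary choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Setting: n = d orthogonal inputs (the standard basis of R^d) with
# arbitrary scalar targets, fitted by f(x) = sum_j a_j * ReLU(<w_j, x>).
d, m = 5, 50                        # input dimension, hidden width (arbitrary)
X = np.eye(d)
y = rng.normal(size=d)

# Small initialisation of both layers.
scale = 1e-6
W = scale * rng.normal(size=(m, d))  # hidden-layer weights
a = scale * rng.normal(size=m)       # output-layer weights

lr, steps = 0.05, 60000              # small step size as a proxy for gradient flow
for t in range(steps):
    H = np.maximum(X @ W.T, 0.0)     # hidden activations, shape (d, m)
    r = H @ a - y                    # residuals
    loss = 0.5 * np.mean(r**2)       # mean squared error
    if t % 5000 == 0:
        print(f"step {t:6d}   loss {loss:.3e}")
    # Gradients of the loss w.r.t. a and W (H > 0 is the ReLU mask).
    grad_a = H.T @ r / d
    grad_W = ((r[:, None] * (H > 0)) * a).T @ X / d
    a -= lr * grad_a
    W -= lr * grad_W
```

With such a small initialisation scale, the printed loss typically sits on a plateau near its starting value before dropping in stages, a discrete-time shadow of the saddle-to-saddle dynamics described in the abstract.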

Etienne Boursier completed his PhD, entitled “Statistical Learning in a strategical environment”, at ENS Paris-Saclay in September 2021 under the supervision of Vianney Perchet. During his PhD, he studied multi-agent learning, combining (online) learning with game-theoretic tools. In particular, he mainly focused on the problem of multiplayer multi-armed bandits, but also worked on other bandit-related problems, social learning and the utility/privacy trade-off. Since October 2021, he has been a postdoc in the Theory of Machine Learning Lab led by Nicolas Flammarion at EPFL. He is currently focusing on multitask/meta-learning and on providing theoretical insights into the empirical success of nonlinear neural networks.