“Catastrophic overfitting is a bug but also a feature”
September 7, 2022 | 10:30 am CET

Despite clear computational advantages for building robust neural networks, adversarial training (AT) with single-step methods is unstable because it suffers from catastrophic overfitting (CO): networks gain non-trivial robustness during the early stages of adversarial training, but suddenly reach a breaking point where they quickly lose all robustness in just a few iterations. Although some works have succeeded in preventing CO, the different mechanisms that lead to this remarkable failure mode are still poorly understood. In this work, however, we find that the interplay between the structure of the data and the dynamics of AT plays a fundamental role in CO. Specifically, through active interventions on typical datasets of natural images, we establish a causal link between the structure of the data and the onset of CO in single-step AT methods. This new perspective provides important insights into the mechanisms that lead to CO and paves the way towards a better understanding of the general dynamics of robust model construction.
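
For readers unfamiliar with the setup, the sketch below shows what a single-step AT update looks like in practice. It is a generic FGSM-style training step in PyTorch, with the random perturbation initialization popularized by "fast" AT variants; it is not the speaker's exact implementation, and the function name, the initialization choice, and the hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def single_step_at_update(model, x, y, eps, optimizer):
    """One FGSM-style (single-step) adversarial training update.

    Generic sketch for illustration; `eps` and the uniform random
    initialization of the perturbation are illustrative choices.
    """
    # Start from a random perturbation inside the eps-ball.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)

    # One gradient step on the perturbation: the "single step" of FGSM.
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = (delta + eps * grad.sign()).clamp_(-eps, eps).detach()

    # Standard training step on the resulting adversarial example.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x + delta), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```

In a loop like this, CO shows up as a sudden collapse: accuracy against the single-step attack used during training stays high, while robustness to stronger multi-step attacks (e.g., PGD) drops to zero within a few iterations.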
Guillermo Ortiz-Jimenez is a fourth-year PhD student at EPFL, working under the supervision of Pascal Frossard. His research uses empirical methods to understand deep learning, with an emphasis on robustness and generalization. During his PhD, Guillermo has visited the University of Oxford as part of the ELLIS PhD program, where he is co-supervised by Philip Torr. He is currently a research intern at Google in Zurich. Before starting his PhD, Guillermo received his MSc in Electrical Engineering from TU Delft, Netherlands, and his BSc in Telecommunications Engineering from Universidad Politécnica de Madrid, Spain, ranking first at both institutions.