Adversarial learning aims to build models that are robust to malicious adversaries.
We are interested in developing a better understanding of the robustness of machine learning models to small, worst-case changes in the inputs known as adversarial examples. For example, we have focused on studying intriguing phenomena in this area such as catastrophic and robust overfitting. Moreover, we are also interested in improving robustness evaluation standards and in understanding the effect of adversarial robustness on other tasks (e.g., robustness to common image corruptions). Recently, we have also become interested in understanding the role of robustness in the parameter space and its effect on generalization.
F. Croce, M. Andriushchenko, V. Sehwag, E. Debenedetti, N. Flammarion, M. Chiang, P. Mittal, M. Hein, RobustBench: a standardized adversarial robustness benchmark, NeurIPS Datasets and Benchmarks Track 2021
M. Andriushchenko, N. Flammarion, Understanding and Improving Fast Adversarial Training, NeurIPS 2020
M. Andriushchenko, F. Croce, N. Flammarion, M. Hein, Square Attack: a query-efficient black-box adversarial attack via random search, ECCV 2020
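To make the notion of a small, worst-case input change concrete, here is a minimal sketch of a one-step l_inf perturbation (in the style of the fast gradient sign method) on a toy logistic-regression model. The model weights and the attack budget are illustrative assumptions; this is not the Square Attack or the fast adversarial training method from the papers above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_linf(x, y, w, b, eps):
    """One-step l_inf attack: move x along the sign of the loss gradient."""
    # Gradient of the logistic loss w.r.t. the input x is (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy linear model and a correctly classified input (illustrative values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.5])   # w @ x + b = 0.5 > 0, so the model predicts label 1
y = 1.0

x_adv = fgsm_linf(x, y, w, b, eps=0.6)
print(w @ x + b)      # positive: clean input classified correctly
print(w @ x_adv + b)  # negative: the bounded perturbation flips the prediction
```

Even this one-step attack suffices to flip a linear classifier; the papers above study stronger multi-step and black-box attacks and how to train models that resist them.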
Robust learning seeks efficient algorithms that can recover an underlying model despite possibly malicious corruptions in the data. Handling corrupted measurements has become crucial in many applications, to name a few: computer vision, economics, astronomy, biology and, above all, safety-critical systems.
S. Pesme, N. Flammarion, Online Robust Regression via SGD on the l1 loss, NeurIPS 2020
Y. Cherapanamjeri, N. Flammarion, P. L. Bartlett, Fast Mean Estimation with Sub-Gaussian Rates, COLT 2019
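As a simple illustration of estimation under malicious corruptions, here is a sketch of the classical median-of-means estimator: a few arbitrarily corrupted samples wreck the empirical mean but barely move the median of bucket averages. This is only the textbook baseline, not the faster sub-Gaussian estimator of the COLT paper above; the sample sizes and corruption values are illustrative assumptions.

```python
import numpy as np

def median_of_means(x, k, rng):
    """Split the samples into k buckets, average each bucket, take the median."""
    buckets = np.array_split(rng.permutation(x), k)
    return float(np.median([b.mean() for b in buckets]))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1000)  # clean samples, true mean 0
x[:5] = 1e6                          # a handful of malicious corruptions

print(abs(x.mean()))                    # empirical mean is dragged far from 0
print(abs(median_of_means(x, 20, rng))) # median-of-means stays close to 0
```

Each corrupted point can contaminate at most one bucket, so as long as fewer than half the buckets are hit, the median of the bucket means remains a reliable estimate.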