Title: Robust deep learning and generative models
The great empirical success of neural networks is undermined by their fragility in the presence of mismatched or adversarially perturbed data. To address these issues, in this research project we study the worst-case robustness of neural networks as measured by their Lipschitz constant. We develop scalable optimization algorithms for its computation and leverage this quantity to provide formal certificates of robustness. To train robust neural networks, we develop algorithms that penalize an upper bound on the Lipschitz constant, namely the complexity measure known as the 1-path-norm. This is a challenging task, given the non-convexity and non-smoothness of the underlying objective function. Notably, controlling the Lipschitz constant of the discriminator network is also crucial in the Generative Adversarial Network (GAN) framework, so we additionally explore the use of our algorithms in that context.
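As a minimal illustration of the quantity discussed above, the sketch below computes the 1-path-norm of a small fully connected network: the sum, over all input-output paths, of the products of absolute weights along each path. Under standard assumptions (1-Lipschitz activations such as ReLU), this sum upper-bounds the network's Lipschitz constant with respect to the appropriate norms. The function name and the toy weights are illustrative, not taken from the project itself.

```python
import numpy as np

def one_path_norm(weights):
    """1-path-norm of an MLP given its list of weight matrices.

    Equals the sum over all input-output paths of the product of
    absolute weights; for 1-Lipschitz activations it upper-bounds
    the network's Lipschitz constant.
    """
    # Propagating a ones vector through the absolute-value matrices
    # accumulates, for each unit, the total weight of paths reaching it.
    v = np.ones(weights[0].shape[1])
    for W in weights:
        v = np.abs(W) @ v
    return float(v.sum())

# Toy two-layer network with scalar output.
W1 = np.array([[1.0, -2.0],
               [0.5,  1.0]])
W2 = np.array([[3.0, -1.0]])
# Paths through unit 1: |3|*(|1|+|-2|) = 9
# Paths through unit 2: |-1|*(|0.5|+|1|) = 1.5
print(one_path_norm([W1, W2]))  # → 10.5
```

Penalizing this quantity during training is non-smooth (because of the absolute values) and non-convex in the weights, which is precisely the optimization difficulty the project addresses.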