This talk was part of the Machine Learning and Mean-Field Games seminar series.
DISTRIBUTIONALLY ROBUST LEARNING OVER DEEP NEURAL NETWORKS AND THEIR ASSOCIATED REGULARIZED RISK
Camilo A Garcia Trillos, University College London
Tuesday, May 24, 2022
Abstract: In this talk, I will explore the relationship between distributionally robust learning and different forms of regularization that enforce robustness of deep neural networks. In particular, I will focus on an adversarial-type problem in which we train the network to be robust within a Wasserstein ball around a set of reference data. Using tools from optimal transport theory, we derive first- and second-order approximations to the distributionally robust problem in terms of appropriate regularized risk minimization problems.
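To make the regularization connection concrete, here is a minimal sketch of the well-known first-order picture for Wasserstein-type robustness: up to first order in the ball radius, the robust loss behaves like the ordinary loss plus a penalty on the norm of the loss gradient with respect to the input. The talk derives its own, more general approximations; this toy example uses binary logistic regression (where the input gradient is analytic) purely for illustration, and all names, the choice of Euclidean norm, and the radius `eps` are assumptions of the sketch, not the speaker's construction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # Binary logistic loss with label y in {-1, +1}.
    return np.log1p(np.exp(-y * np.dot(w, x)))

def input_gradient(w, x, y):
    # Analytic gradient of the loss with respect to the input x.
    return -y * sigmoid(-y * np.dot(w, x)) * w

def robust_surrogate(w, x, y, eps):
    # First-order surrogate for the distributionally robust loss:
    # empirical loss plus eps times the norm of the input gradient.
    return logistic_loss(w, x, y) + eps * np.linalg.norm(input_gradient(w, x, y))

w = np.array([1.0, -2.0])
x = np.array([0.5, 0.3])
y = 1.0
print(logistic_loss(w, x, y))
print(robust_surrogate(w, x, y, eps=0.1))
```

Since the penalty term is nonnegative, the surrogate upper-bounds the plain loss and reduces to it when the radius `eps` is zero.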
In the context of ResNets, we can combine the above connection with a Pontryagin maximum principle to motivate a family of scalable algorithms for training robust neural networks. Our analysis recovers some results and algorithms known in the literature and provides other theoretical and algorithmic insights that are, to our knowledge, novel.
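The optimal-control viewpoint behind the Pontryagin maximum principle rests on reading a ResNet forward pass as an explicit Euler discretization of an ODE, x_{k+1} = x_k + h f(x_k, theta_k), with the layer parameters theta_k playing the role of controls. A toy sketch of that forward pass, assuming a tanh residual block and a step size `h` (both illustrative choices, not the talk's specific architecture):

```python
import numpy as np

def resnet_forward(x, thetas, h=0.1):
    # Forward pass of a toy ResNet, viewed as an explicit Euler
    # discretization of the controlled ODE  dx/dt = f(x, theta(t)):
    #   x_{k+1} = x_k + h * f(x_k, theta_k)
    for W, b in thetas:
        x = x + h * np.tanh(W @ x + b)  # one residual block
    return x

rng = np.random.default_rng(0)
d, depth = 3, 5
thetas = [(rng.normal(size=(d, d)), rng.normal(size=d)) for _ in range(depth)]
x0 = rng.normal(size=d)
print(resnet_forward(x0, thetas))
```

In this reading, training is an optimal control problem over the sequence of controls theta_k, which is what makes maximum-principle-based algorithms applicable.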
Joint work with Nicolas Garcia Trillos.