
Learning Distributionally Robust Models at Scale via Composite Optimization

Authors:
Haddadpour, Farzin
Kamani, Mohammad Mahdi
Mahdavi, Mehrdad
Karbasi, Amin
Publication Year:
2022

Abstract

Distributionally robust optimization (DRO) has proven very effective for training machine learning models that are robust to distribution shifts in the data. However, existing approaches to learning a distributionally robust model either require solving complex optimization problems, such as semidefinite programs, or rely on first-order methods whose convergence scales linearly with the number of data samples -- hindering their scalability to large datasets. In this paper, we show that different variants of DRO are simply instances of finite-sum composite optimization, for which we provide scalable methods. We also provide empirical results demonstrating the effectiveness of our proposed algorithm, relative to prior art, in learning robust models from very large datasets.

Comment: Accepted to ICLR 2022 as a conference paper. International Conference on Learning Representations (2022).
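As an illustration of the kind of reduction the abstract describes (the specific uncertainty set and notation below are an assumption for exposition, not taken from the paper itself), consider DRO over a CVaR uncertainty set at level \alpha:

  \min_{\theta} \; \max_{p \in \mathcal{P}_\alpha} \; \sum_{i=1}^{n} p_i \, \ell_i(\theta),
  \qquad \mathcal{P}_\alpha = \{ p \in \Delta_n : p_i \le \tfrac{1}{\alpha n} \}.

By CVaR duality, the inner maximization has the closed form

  \min_{\theta, \eta} \; \eta + \frac{1}{\alpha n} \sum_{i=1}^{n} \big[ \ell_i(\theta) - \eta \big]_+,

a finite-sum composite objective of the form h\big( \tfrac{1}{n} \sum_{i=1}^{n} g_i(\theta, \eta) \big), with per-sample maps g_i and a simple outer function h, which is exactly the structure that scalable composite-optimization methods can exploit.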

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2203.09607
Document Type:
Working Paper