
NoisyMix: Boosting Model Robustness to Common Corruptions

Authors:
Erichson, N. Benjamin
Lim, Soon Hoe
Xu, Winnie
Utrera, Francisco
Cao, Ziang
Mahoney, Michael W.
Publication Year:
2022

Abstract

For many real-world applications, obtaining stable and robust statistical performance is more important than simply achieving state-of-the-art predictive test accuracy, and thus the robustness of neural networks is an increasingly important topic. Relatedly, data augmentation schemes have been shown to improve robustness with respect to input perturbations and domain shifts. Motivated by this, we introduce NoisyMix, a novel training scheme that promotes stability and leverages noisy augmentations in input and feature space to improve both model robustness and in-domain accuracy. NoisyMix produces models that are consistently more robust and that provide well-calibrated estimates of class membership probabilities. We demonstrate the benefits of NoisyMix on a range of benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P. Moreover, we provide theory to understand the implicit regularization and robustness of NoisyMix.
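The abstract describes NoisyMix as combining mixup-style data augmentation with noise injections in input and feature space. The sketch below illustrates the general idea of one such noisy mixing step in NumPy; the function name, hyperparameters, and noise model are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def noisy_mix(x1, x2, alpha=1.0, add_std=0.1, mult_std=0.1, rng=None):
    """Illustrative sketch (not the paper's API): convexly combine two
    inputs or hidden features with a Beta-sampled coefficient, then
    inject additive and multiplicative Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)            # mixup mixing coefficient in [0, 1]
    mixed = lam * x1 + (1.0 - lam) * x2     # mixup in input/feature space
    add_noise = add_std * rng.standard_normal(mixed.shape)
    mult_noise = 1.0 + mult_std * rng.standard_normal(mixed.shape)
    return mult_noise * mixed + add_noise   # noisy version of the mixed sample

# Example: mix two batches of features
a = np.zeros((4, 3))
b = np.ones((4, 3))
out = noisy_mix(a, b)
```

With the noise scales set to zero, the step reduces to plain mixup, so each output entry lies between the corresponding entries of the two inputs.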

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2202.01263
Document Type:
Working Paper