RobustBench: a standardized adversarial robustness benchmark
- Publication Year :
- 2020
Abstract
- As a research community, we still lack a systematic understanding of the progress on adversarial robustness, which often makes it hard to identify the most promising ideas for training robust models. A key challenge in benchmarking robustness is that its evaluation is often error-prone, leading to overestimation of robustness. Our goal is to establish a standardized benchmark of adversarial robustness that reflects, as accurately as possible, the robustness of the considered models within a reasonable computational budget. To this end, we start by considering the image classification task and introduce restrictions (possibly loosened in the future) on the allowed models. We evaluate adversarial robustness with AutoAttack, an ensemble of white- and black-box attacks, which was recently shown in a large-scale study to improve almost all robustness evaluations compared to the original publications. To prevent overadaptation of new defenses to AutoAttack, we welcome external evaluations based on adaptive attacks, especially where AutoAttack flags a potential overestimation of robustness. Our leaderboard, hosted at https://robustbench.github.io/, contains evaluations of 120+ models and aims to reflect the current state of the art in image classification on a set of well-defined tasks in $\ell_\infty$- and $\ell_2$-threat models and on common corruptions, with possible extensions in the future. Additionally, we open-source the library https://github.com/RobustBench/robustbench, which provides unified access to 80+ robust models to facilitate their downstream applications. Finally, based on the collected models, we analyze the impact of robustness on performance under distribution shifts, calibration, out-of-distribution detection, fairness, privacy leakage, smoothness, and transferability.
- Comment :
- The camera-ready version accepted at the NeurIPS'21 Datasets and Benchmarks Track: 120+ evaluations, 80+ models, 7 leaderboards (Linf, L2, common corruptions; CIFAR-10, CIFAR-100, ImageNet), significantly expanded analysis part (calibration, fairness, privacy leakage, smoothness, transferability)
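The abstract describes two pieces of tooling: the robustbench library for loading the benchmarked robust models, and AutoAttack for the standardized evaluation. The following is a minimal sketch of how the two are typically combined; the function names, parameters, and the example model name are assumptions based on the libraries' public interfaces and may differ from the released versions.

```python
# Minimal sketch (not taken from the paper): load a robust model from the
# RobustBench model zoo and evaluate it with AutoAttack at the standard
# L-inf budget eps = 8/255. Entry points and arguments are assumptions.
import torch
from robustbench.utils import load_model  # assumed loader for the model zoo
from autoattack import AutoAttack         # assumed package for the attack suite

# Pick one of the 80+ pre-trained models (name, dataset, and threat model
# below are illustrative choices).
model = load_model(model_name='Carmon2019Unlabeled',
                   dataset='cifar10',
                   threat_model='Linf')
model.eval()

# Placeholder batch standing in for CIFAR-10 test images and labels.
x_test = torch.rand(16, 3, 32, 32)
y_test = torch.randint(0, 10, (16,))

# Run the standard AutoAttack ensemble of white- and black-box attacks.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=16)
```

In the benchmark's workflow, the robust accuracy reported on the leaderboard corresponds to accuracy on the adversarial examples produced by this standardized evaluation, which is what makes results across defenses comparable.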
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2010.09670
- Document Type :
- Working Paper