Revisiting Semi-supervised Adversarial Robustness via Noise-aware Online Robust Distillation
- Publication Year :
- 2024
Abstract
- The robust self-training (RST) framework has emerged as a prominent approach to semi-supervised adversarial training. To tackle more complicated tasks with even lower labeling budgets, and unlike prior approaches that rely on robust pretrained models, we present SNORD, a simple yet effective framework that brings contemporary semi-supervised learning techniques into adversarial training. By enhancing pseudo labels and managing noisy training data more effectively, SNORD achieves state-of-the-art performance across diverse datasets and labeling budgets, all without the need for pretrained models. Compared to full adversarial supervision, SNORD attains 90% relative robust accuracy under AutoAttack at epsilon = 8/255 while requiring fewer than 0.1%, 2%, and 10% of the labels for CIFAR-10, CIFAR-100, and TinyImageNet-200, respectively. Additional experiments confirm the efficacy of each component and show that SNORD can be combined with existing adversarial pretraining strategies to further bolster robustness.
- Comment : 12 pages, 4 figures, 9 tables
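The abstract describes an RST-style pipeline: pseudo-label the unlabeled pool, handle the resulting label noise, and adversarially train on the combined data. Below is a minimal PyTorch sketch of one such training step, for orientation only. The function names, the PGD attack settings, and the confidence-threshold filter are illustrative assumptions; the paper's actual noise-aware distillation is not reproduced here.
```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generate L-infinity PGD adversarial examples (standard attack, assumed setup)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def rst_step(model, optimizer, x_lab, y_lab, x_unlab, conf_thresh=0.95):
    """One RST-style step: pseudo-label unlabeled data, filter low-confidence
    (likely noisy) labels, then adversarially train on the combined batch.
    The threshold filter is a simple stand-in for noise-aware handling."""
    model.eval()  # freeze batch-norm statistics for labeling and attack
    with torch.no_grad():
        probs = F.softmax(model(x_unlab), dim=1)
        conf, y_pseudo = probs.max(dim=1)
        keep = conf >= conf_thresh  # discard uncertain pseudo-labels
    x = torch.cat([x_lab, x_unlab[keep]])
    y = torch.cat([y_lab, y_pseudo[keep]])
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```
In this sketch the confidence threshold plays the role of "managing noisy training data": pseudo-labels the current model is unsure about are simply dropped rather than reweighted or distilled, which is where the paper's method would differ.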
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2409.12946
- Document Type :
- Working Paper