
Better Representations via Adversarial Training in Pre-Training: A Theoretical Perspective

Authors:
Xing, Yue
Lin, Xiaofeng
Song, Qifan
Xu, Yi
Zeng, Belinda
Cheng, Guang
Publication Year:
2024

Abstract

Pre-training is known to generate universal representations for downstream tasks in large-scale deep learning, such as large language models. Existing literature, e.g., Kim et al. (2020), empirically observes that downstream tasks can inherit the adversarial robustness of the pre-trained model. We provide theoretical justifications for this robustness inheritance phenomenon. Our theoretical results reveal that feature purification plays an important role in connecting the adversarial robustness of the pre-trained model and the downstream tasks in two-layer neural networks. Specifically, we show that (i) with adversarial training, each hidden node tends to pick only one (or a few) feature; (ii) without adversarial training, the hidden nodes can be vulnerable to attacks. This observation holds for both supervised pre-training and contrastive learning. With purified nodes, clean training alone turns out to be enough to achieve adversarial robustness in downstream tasks.

Comment: To appear in AISTATS 2024
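The pipeline described in the abstract, adversarial pre-training of a two-layer ReLU network followed by clean training of a downstream head on the frozen representation, can be sketched as below. This is a minimal illustration under assumptions, not the paper's construction: the synthetic sparse-feature data, the one-step FGSM attack with l_inf radius eps = 0.1, the layer widths, and the shifted downstream labels are all illustrative choices made here for the sake of a runnable example.

# Minimal sketch (assumptions, not the paper's setup): adversarially pre-train a
# two-layer ReLU network, then do clean training of a downstream linear head
# on the frozen hidden representation.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, m, n = 20, 64, 2048                       # input dim, hidden width, sample size (assumed)

# Synthetic pre-training task: label depends on a sparse set of input features.
w_star = torch.zeros(d); w_star[:3] = 1.0
X = torch.randn(n, d)
y = torch.sign(X @ w_star).unsqueeze(1)      # +/-1 labels

hidden = nn.Sequential(nn.Linear(d, m), nn.ReLU())
head_pre = nn.Linear(m, 1)
opt = torch.optim.SGD(list(hidden.parameters()) + list(head_pre.parameters()), lr=0.1)
loss_fn = nn.SoftMarginLoss()                # logistic-type loss for +/-1 labels
eps = 0.1                                    # l_inf attack radius (assumed)

for _ in range(200):                         # adversarial pre-training with a one-step (FGSM) attack
    X_req = X.clone().requires_grad_(True)
    loss = loss_fn(head_pre(hidden(X_req)), y)
    grad = torch.autograd.grad(loss, X_req)[0]
    X_adv = (X + eps * grad.sign()).detach() # l_inf perturbation of the inputs
    opt.zero_grad()
    loss_fn(head_pre(hidden(X_adv)), y).backward()
    opt.step()

# Downstream task: clean (non-adversarial) training of a fresh linear head on frozen features.
y_down = torch.sign(X @ torch.roll(w_star, 1)).unsqueeze(1)  # assumed related downstream labels
for p in hidden.parameters():
    p.requires_grad_(False)
head_down = nn.Linear(m, 1)
opt_down = torch.optim.SGD(head_down.parameters(), lr=0.1)
for _ in range(200):
    opt_down.zero_grad()
    loss_fn(head_down(hidden(X)), y_down).backward()
    opt_down.step()

Under the feature-purification result summarized in the abstract, the hidden units obtained from the adversarial phase would each align with one (or a few) input features, which is what allows the clean downstream phase to retain robustness; inspecting the rows of hidden[0].weight after pre-training is one way to probe this in the sketch above.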

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2401.15248
Document Type:
Working Paper