
Understanding Robustness of Parameter-Efficient Tuning for Image Classification

Authors :
Ruan, Jiacheng
Gao, Xian
Xiang, Suncheng
Xie, Mingye
Liu, Ting
Fu, Yuzhuo
Publication Year :
2024

Abstract

Parameter-efficient tuning (PET) techniques adapt pre-trained models to downstream tasks by freezing the pre-trained weights and introducing a small number of learnable parameters. However, despite the many PET methods proposed, their robustness has not been thoroughly investigated. In this paper, we systematically explore the robustness of four classical PET techniques (i.e., VPT, Adapter, AdaptFormer, and LoRA) under both white-box attacks and information perturbations. For white-box attack scenarios, we first analyze the performance of PET techniques under FGSM and PGD attacks. We then examine the transferability of adversarial samples and the impact of the number of learnable parameters on the robustness of PET methods. Under information perturbation attacks, we introduce four distinct perturbation strategies, namely Patch-wise Drop, Pixel-wise Drop, Patch Shuffle, and Gaussian Noise, to comprehensively assess the robustness of these PET techniques in the presence of information loss. Through these extensive studies, we deepen the understanding of the robustness of PET methods and provide valuable insights for improving their performance in computer vision applications. The code is available at https://github.com/JCruan519/PETRobustness.

Comment: 5 pages, 2 figures. Work in Progress
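
For concreteness, the following is a minimal PyTorch-style sketch of the attacks and perturbations named in the abstract: single-step FGSM plus the four information-perturbation strategies (Patch-wise Drop, Pixel-wise Drop, Patch Shuffle, Gaussian Noise). It is an illustrative sketch, not code from the authors' repository; the function names, epsilon, patch size, and drop ratios are assumptions chosen for the example, and image sides are assumed to be divisible by the patch size.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    # Single-step FGSM: move the input along the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def gaussian_noise(x, sigma=0.1):
    # Add zero-mean Gaussian noise to every pixel.
    return (x + sigma * torch.randn_like(x)).clamp(0, 1)

def pixel_drop(x, p=0.3):
    # Zero out a random fraction p of pixel positions (mask shared across channels).
    mask = (torch.rand_like(x[:, :1]) > p).float()
    return x * mask

def patch_drop(x, patch=16, p=0.3):
    # Zero out a random fraction p of non-overlapping patches.
    b, c, h, w = x.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, 1, gh, gw, device=x.device) > p).float()
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return x * mask

def patch_shuffle(x, patch=16):
    # Randomly permute the non-overlapping patches of each image
    # (the same permutation is reused across the batch for simplicity).
    b, c, h, w = x.shape
    gh, gw = h // patch, w // patch
    pt = x.unfold(2, patch, patch).unfold(3, patch, patch)   # b, c, gh, gw, patch, patch
    pt = pt.contiguous().view(b, c, gh * gw, patch, patch)
    pt = pt[:, :, torch.randperm(gh * gw, device=x.device)]
    pt = pt.view(b, c, gh, gw, patch, patch).permute(0, 1, 2, 4, 3, 5)
    return pt.contiguous().view(b, c, h, w)

A PGD attack, in this setting, would simply iterate the FGSM step several times and project the perturbation back onto the epsilon-ball after each step; it is omitted here for brevity.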

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2410.09845
Document Type :
Working Paper