
Downstream-agnostic Adversarial Examples

Authors:
Zhou, Ziqi
Hu, Shengshan
Zhao, Ruizhi
Wang, Qian
Zhang, Leo Yu
Hou, Junhui
Jin, Hai
Publication Year:
2023

Abstract

Self-supervised learning typically uses a large amount of unlabeled data to pre-train an encoder that can serve as a general-purpose feature extractor, so that downstream users only need to fine-tune it to enjoy the benefits of a "large model". Despite this promising prospect, the security of pre-trained encoders has not yet been thoroughly investigated, especially when a pre-trained encoder is publicly available for commercial use. In this paper, we propose AdvEncoder, the first framework for generating downstream-agnostic universal adversarial examples based on a pre-trained encoder. AdvEncoder aims to construct a universal adversarial perturbation or patch for a set of natural images that can fool all downstream tasks inheriting the victim pre-trained encoder. Unlike traditional adversarial-example settings, the pre-trained encoder outputs only feature vectors rather than classification labels. We therefore first exploit the high-frequency component information of the image to guide the generation of adversarial examples. We then design a generative attack framework that constructs adversarial perturbations/patches by learning the distribution of the attack surrogate dataset, improving their attack success rate and transferability. Our results show that an attacker can successfully attack downstream tasks without knowing either the pre-training dataset or the downstream dataset. We also tailor four defenses for pre-trained encoders, whose results further demonstrate the attack capability of AdvEncoder.

Comment: This paper has been accepted by the International Conference on Computer Vision (ICCV '23, October 2--6, 2023, Paris, France)
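As a rough illustration of the attack idea the abstract describes, the sketch below optimizes a single universal L-infinity-bounded perturbation against a frozen encoder, pushing adversarial features away from clean features while biasing the perturbation toward high-frequency content. This is a minimal sketch under stated assumptions, not the paper's method: AdvEncoder trains a generative network, whereas this optimizes the perturbation directly; the ResNet-18 stand-in encoder, the FakeData surrogate set, and the loss weights are all assumptions for illustration.

```python
# Hedged sketch of a downstream-agnostic universal perturbation attack.
# Assumptions (not from the paper): ResNet-18 as the victim encoder,
# FakeData as the attack surrogate dataset, direct optimization of the
# perturbation instead of AdvEncoder's generative framework.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import FakeData
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen victim encoder that outputs feature vectors, not labels.
encoder = models.resnet18(weights=None)
encoder.fc = torch.nn.Identity()
encoder = encoder.to(device).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

# Attack surrogate dataset (random stand-in images).
loader = DataLoader(
    FakeData(size=256, image_size=(3, 224, 224), transform=T.ToTensor()),
    batch_size=32, shuffle=True)

# Universal perturbation, bounded by an L-infinity budget.
eps = 10 / 255
delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

def high_freq(x):
    """High-frequency component: image minus a low-pass (blurred) copy."""
    low = F.avg_pool2d(x, kernel_size=7, stride=1, padding=3)
    return x - low

for epoch in range(5):
    for x, _ in loader:
        x = x.to(device)
        x_adv = (x + delta).clamp(0, 1)
        # Push adversarial features away from clean features ...
        feat_loss = F.cosine_similarity(encoder(x_adv), encoder(x)).mean()
        # ... while concentrating perturbation energy in high frequencies.
        hfc_loss = -high_freq(x_adv).abs().mean()
        loss = feat_loss + 0.1 * hfc_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)
```

Because the perturbation is optimized only against the encoder's feature space, it can then be added to any test image fed to downstream models built on that encoder, which is what makes the attack downstream-agnostic.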

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2307.12280
Document Type:
Working Paper