Effective and Efficient Adversarial Detection for Vision-Language Models via A Single Vector

Authors:
Huang, Youcheng
Zhu, Fengbin
Tang, Jingkun
Zhou, Pan
Lei, Wenqiang
Lv, Jiancheng
Chua, Tat-Seng
Publication Year:
2024

Abstract

Visual Language Models (VLMs) are vulnerable to adversarial attacks, especially those from adversarial images, a threat that remains under-explored in the literature. To facilitate research on this critical safety problem, we first construct a new laRge-scale Adversarial images dataset with Diverse hArmful Responses (RADAR), given that existing datasets are either small-scale or contain only limited types of harmful responses. With the new RADAR dataset, we further develop a novel and effective iN-time Embedding-based AdveRSarial Image DEtection (NEARSIDE) method, which exploits a single vector distilled from the hidden states of VLMs, called the attacking direction, to distinguish adversarial images from benign ones in the input. Extensive experiments with two victim VLMs, LLaVA and MiniGPT-4, demonstrate the effectiveness, efficiency, and cross-model transferability of our proposed method. Our code is available at https://github.com/mob-scu/RADAR-NEARSIDE
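The abstract only states that detection relies on a single "attacking direction" vector distilled from the victim VLM's hidden states; the exact distillation and decision rule are in the linked repository. Below is a minimal sketch of how such a linear detector might look, assuming (this is not stated in the abstract) that the direction is the difference of class means over hidden states and that detection thresholds the projection onto it. All function and variable names here are hypothetical.

```python
import torch

def attacking_direction(adv_states: torch.Tensor,
                        benign_states: torch.Tensor) -> torch.Tensor:
    """Distill a single direction vector from VLM hidden states.

    adv_states / benign_states: (N, d) hidden states of the victim VLM
    for adversarial and benign inputs (hypothetical setup).
    """
    # Assumed construction: difference of class means, a common way to
    # extract a linear "concept" direction from model representations.
    direction = adv_states.mean(dim=0) - benign_states.mean(dim=0)
    return direction / direction.norm()  # unit-normalize

def is_adversarial(hidden_state: torch.Tensor,
                   direction: torch.Tensor,
                   threshold: float) -> bool:
    """Flag an input whose embedding projects strongly onto the direction."""
    score = hidden_state @ direction  # scalar projection onto the direction
    return bool(score > threshold)

# Usage sketch: fit the threshold on held-out projections, e.g. midway
# between the mean scores of the two classes.
# adv_h, ben_h = collect_hidden_states(...)  # hypothetical helper
# d = attacking_direction(adv_h, ben_h)
# thr = 0.5 * float((adv_h @ d).mean() + (ben_h @ d).mean())
```

Because the detector reduces to a dot product with one precomputed vector, it adds negligible cost at inference time, which is consistent with the efficiency claim in the abstract.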

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2410.22888
Document Type:
Working Paper