
Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models

Authors:
Li, Jiahui
Hao, Yongchang
Xu, Haoyu
Wang, Xing
Hong, Yu
Publication Year:
2024

Abstract

Despite advances in training Large Language Models (LLMs) with alignment techniques to enhance the safety of generated content, these models remain susceptible to jailbreaking, an adversarial attack method that exposes security vulnerabilities in LLMs. Notably, the Greedy Coordinate Gradient (GCG) method has demonstrated the ability to automatically generate adversarial suffixes that jailbreak state-of-the-art LLMs. However, the optimization process involved in GCG is highly time-consuming, rendering the jailbreaking pipeline inefficient. In this paper, we investigate the GCG process and identify an issue of Indirect Effect, the key bottleneck of GCG optimization. To address it, we propose the Model Attack Gradient Index GCG (MAGIC), which mitigates the Indirect Effect by exploiting the gradient information of the suffix tokens, accelerating the procedure through less computation and fewer iterations. Our experiments on AdvBench show that MAGIC achieves up to a 1.5x speedup while maintaining Attack Success Rates (ASR) on par with or higher than other baselines. MAGIC achieves an ASR of 74% on Llama-2 and an ASR of 54% when conducting transfer attacks on GPT-3.5. Code is available at https://github.com/jiah-li/magic.

Comment: 13 pages, 2 figures, accepted by The 31st International Conference on Computational Linguistics
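The gradient-guided token selection underlying GCG-style methods can be illustrated with a minimal sketch. This is a toy example with random gradients and a hypothetical helper name (`gcg_candidate_tokens`); the actual method computes the gradient of the adversarial loss through the target LLM's embedding layer and then evaluates sampled candidate substitutions with forward passes.

```python
import numpy as np

def gcg_candidate_tokens(grad, k):
    """Illustrative GCG-style candidate selection (toy sketch).

    grad: array of shape (suffix_len, vocab_size), the gradient of the
    adversarial loss with respect to the one-hot encoding of each suffix
    token. For each suffix position, return the k token ids whose
    substitution is predicted (to first order) to decrease the loss most,
    i.e. those with the most negative gradient entries.
    """
    return np.argsort(grad, axis=1)[:, :k]

# Toy setup: a suffix of 3 tokens over a vocabulary of 5 tokens.
rng = np.random.default_rng(0)
grad = rng.normal(size=(3, 5))
cands = gcg_candidate_tokens(grad, k=2)  # shape (3, 2): top-2 ids per position
```

In the full attack, one candidate substitution is then sampled per iteration, the true loss of each candidate suffix is measured with a forward pass, and the best substitution is kept greedily; MAGIC's contribution, per the abstract, is using the suffix-token gradient information to cut down this per-iteration cost.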

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2412.08615
Document Type:
Working Paper