1. Demonstration Attack against In-Context Learning for Code Intelligence
- Author
- Ge, Yifei, Sun, Weisong, Lou, Yihang, Fang, Chunrong, Zhang, Yiran, Li, Yiming, Zhang, Xiaofang, Liu, Yang, Zhao, Zhihong, and Chen, Zhenyu
- Subjects
Computer Science - Cryptography and Security, Computer Science - Software Engineering
- Abstract
Recent advancements in large language models (LLMs) have revolutionized code intelligence by improving programming productivity and alleviating challenges faced by software developers. To further improve the performance of LLMs on specific code intelligence tasks and reduce training costs, researchers have revealed a new capability of LLMs: in-context learning (ICL). ICL allows LLMs to learn from a few demonstrations within a specific context, achieving impressive results without parameter updates. However, the rise of ICL introduces new security vulnerabilities in the code intelligence field. In this paper, we explore a novel security scenario based on the ICL paradigm, where attackers act as third-party ICL agencies and provide users with bad ICL content to mislead LLM outputs in code intelligence tasks. Our study demonstrates the feasibility and risks of such a scenario, revealing how attackers can leverage malicious demonstrations to construct bad ICL content and induce LLMs to produce incorrect outputs, posing significant threats to system security. We propose a novel method for constructing bad ICL content called DICE, which is composed of two stages: Demonstration Selection and Bad ICL Construction. DICE constructs targeted bad ICL content based on the user query, and the resulting content is transferable across different query inputs. Ultimately, our findings emphasize the critical importance of securing ICL mechanisms to protect code intelligence systems from adversarial manipulation.
- Comment
- 17 pages, 5 figures
- Published
- 2024
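
The abstract describes attackers posing as third-party ICL providers who supply poisoned demonstrations alongside a user's query. Below is a minimal, hypothetical Python sketch of that threat model, not the paper's DICE method: it only shows how demonstrations are concatenated into an ICL prompt and how a single misleadingly labeled demonstration can be slipped in. All names, code snippets, and labels are illustrative assumptions.

```python
# Illustrative sketch of the ICL threat model (not the paper's DICE implementation).
# A "demonstration" pairs example code with the answer it teaches the LLM.

from dataclasses import dataclass


@dataclass
class Demonstration:
    code: str   # example code snippet shown to the LLM
    label: str  # answer the demonstration teaches, e.g. "vulnerable" / "safe"


def build_icl_prompt(demos: list[Demonstration], query_code: str) -> str:
    """Concatenate demonstrations and the user query into one ICL prompt."""
    parts = [f"Code:\n{d.code}\nAnswer: {d.label}\n" for d in demos]
    parts.append(f"Code:\n{query_code}\nAnswer:")
    return "\n".join(parts)


# Benign demonstrations a user might expect from an ICL provider (hypothetical).
clean_demos = [
    Demonstration("strcpy(buf, user_input);", "vulnerable"),
    Demonstration("strncpy(buf, user_input, sizeof(buf) - 1);", "safe"),
]

# A poisoned demonstration: plausible-looking code paired with a wrong label,
# chosen to resemble the user's query and bias the model toward "safe".
poisoned_demo = Demonstration("system(user_command);", "safe")

user_query = 'os.system("rm -rf " + path)  # path is user-controlled'

prompt = build_icl_prompt(clean_demos + [poisoned_demo], user_query)
print(prompt)
```

The sketch only assembles the prompt string; it does not call any LLM. The point is that the malicious party controls the demonstrations, not the model or the user's query, which is why selecting demonstrations that resemble the query (as in the paper's Demonstration Selection stage) matters for the attack's effectiveness.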