1. Optimization-based Prompt Injection Attack to LLM-as-a-Judge
- Author
Shi, Jiawen; Yuan, Zenghui; Liu, Yinuo; Huang, Yue; Zhou, Pan; Sun, Lichao; and Gong, Neil Zhenqiang
- Abstract
LLM-as-a-Judge uses large language models (LLMs) to assess textual information. Existing studies show that LLMs perform remarkably well in this role, offering a compelling alternative to traditional human assessment. However, the robustness of these systems against prompt injection attacks remains an open question. In this work, we introduce JudgeDeceiver, a novel optimization-based prompt injection attack tailored to LLM-as-a-Judge. Our method formulates a precise optimization objective for attacking the decision-making process of LLM-as-a-Judge and utilizes an optimization algorithm to efficiently automate the generation of adversarial sequences, achieving targeted and effective manipulation of model evaluations. Compared to handcrafted prompt injection attacks, our method demonstrates superior efficacy, posing a significant challenge to the current security paradigms of LLM-based judgment systems. Through extensive experiments, we showcase the capability of JudgeDeceiver to alter decision outcomes across various cases, highlighting the vulnerability of LLM-as-a-Judge systems to optimization-based prompt injection attacks.
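To make the attack's shape concrete, the following is a minimal, self-contained sketch of an optimization-based prompt injection against a judge. It is *not* the paper's JudgeDeceiver algorithm (which optimizes adversarial sequences against real LLM judges); here `toy_judge_score` is a hypothetical stand-in scoring function, and the search is a simple greedy coordinate ascent over a tiny token vocabulary:

```python
import random

def toy_judge_score(prompt: str, answer: str) -> float:
    """Hypothetical stand-in for an LLM judge: rewards occurrences of
    certain trigger words. A real attack would query an actual LLM."""
    triggers = {"excellent": 2.0, "correct": 1.5, "best": 1.0}
    text = (prompt + " " + answer).lower()
    return sum(w * text.count(t) for t, w in triggers.items())

def optimize_injection(target_answer, vocab, seq_len=4, iters=50, seed=0):
    """Greedy coordinate-ascent search for an adversarial suffix that
    maximizes the judge's score of the attacker's target answer.
    Illustrative only; the paper uses a more principled optimizer."""
    rng = random.Random(seed)
    seq = [rng.choice(vocab) for _ in range(seq_len)]

    def score(s):
        injected = target_answer + " " + " ".join(s)
        return toy_judge_score("Question: which answer is better?", injected)

    best = score(seq)
    for _ in range(iters):
        pos = rng.randrange(seq_len)           # pick one position to optimize
        for tok in vocab:                      # try every replacement token
            cand = seq[:]
            cand[pos] = tok
            c = score(cand)
            if c > best:                       # keep the best substitution
                best, seq = c, cand
    return " ".join(seq), best

vocab = ["excellent", "maybe", "correct", "best", "plain", "noisy"]
suffix, final_score = optimize_injection("Answer A.", vocab)
```

Appending `suffix` to the attacker's candidate answer raises the toy judge's score for it, mirroring how an optimized adversarial sequence can steer a judge's comparison toward the attacker's response.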
- Published
2024