1. Can Watermarked LLMs be Identified by Users via Crafted Prompts?
- Authors
Liu, Aiwei; Guan, Sheng; Liu, Yiming; Pan, Leyi; Zhang, Yifei; Fang, Liancheng; Wen, Lijie; Yu, Philip S.; Hu, Xuming
- Subjects
Computer Science - Cryptography and Security; Computer Science - Computation and Language; 68T50; I.2.7
- Abstract
Text watermarking for Large Language Models (LLMs) has made significant progress in detecting LLM outputs and preventing misuse. Current watermarking techniques offer high detectability, minimal impact on text quality, and robustness to text editing. However, existing research has not investigated the imperceptibility of watermarking techniques in LLM services. This matters because LLM providers may not want to disclose the presence of watermarks in real-world scenarios: disclosure could reduce user willingness to use the service and make watermarks more vulnerable to attack. This work is the first to investigate the imperceptibility of watermarked LLMs. We design an identification algorithm called Water-Probe that detects watermarks through well-designed prompts to the LLM. Our key observation is that current watermarked LLMs expose consistent biases under the same watermark key, resulting in similar differences across prompts under different watermark keys. Experiments show that almost all mainstream watermarking algorithms are easily identified with our well-designed prompts, while Water-Probe exhibits a minimal false positive rate on non-watermarked LLMs. Finally, we propose that the key to enhancing the imperceptibility of watermarked LLMs is to increase the randomness of watermark key selection. Based on this, we introduce the Water-Bag strategy, which significantly improves watermark imperceptibility by merging multiple watermark keys.
- Comment
25 pages, 5 figures, 8 tables
- Published
2024
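The abstract's key intuition, that a fixed watermark key leaves a consistent, detectable bias across repeated queries, can be illustrated with a toy simulation. The sketch below is not the paper's Water-Probe algorithm; it assumes a simplified KGW-style green-list watermark over a small synthetic vocabulary, and all function names (`green_list`, `sample_tokens`, `probe_consistency`) are hypothetical. It shows that two independent sampling runs from a "watermarked model" correlate far more strongly than runs from an unwatermarked one, because the key-induced bias is deterministic.

```python
import hashlib
import random
from collections import Counter

VOCAB = list(range(1000))  # toy vocabulary of token ids

def green_list(key: int, context: int, frac: float = 0.5) -> set:
    """Simplified KGW-style split: hash (key, context) to seed a
    pseudorandom 'green' subset of the vocabulary."""
    seed = int(hashlib.sha256(f"{key}:{context}".encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, int(frac * len(VOCAB))))

def sample_tokens(prompt: int, n: int, watermarked: bool, key: int,
                  bias: float, seed: int) -> list:
    """Toy sampler: a watermarked model upweights green-list tokens
    determined by the fixed key and the prompt context."""
    rng = random.Random(seed)
    greens = green_list(key, prompt) if watermarked else set()
    weights = [bias if t in greens else 1.0 for t in VOCAB]
    return rng.choices(VOCAB, weights=weights, k=n)

def count_vector(tokens: list) -> list:
    """Token-frequency vector over the whole vocabulary."""
    c = Counter(tokens)
    return [c[t] for t in VOCAB]

def pearson(x: list, y: list) -> float:
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def probe_consistency(watermarked: bool, key: int = 42, prompt: int = 7) -> float:
    """Probe-style check (illustrative only): query the 'model' twice with
    the same prompt and correlate the resulting token-count vectors. The
    fixed key leaves a consistent bias, so independent runs of a
    watermarked model correlate; an unwatermarked model stays near zero."""
    a = count_vector(sample_tokens(prompt, 2000, watermarked, key, 4.0, seed=0))
    b = count_vector(sample_tokens(prompt, 2000, watermarked, key, 4.0, seed=1))
    return pearson(a, b)

print(f"watermarked: {probe_consistency(True):.2f}, "
      f"unwatermarked: {probe_consistency(False):.2f}")
```

In this simplified setting the watermarked runs show a clearly positive correlation while the unwatermarked ones hover near zero, which mirrors the paper's point that key reuse makes the watermark identifiable; the proposed Water-Bag remedy of merging multiple keys would, in this toy model, correspond to drawing a fresh key per query so the bias no longer repeats.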