
Can Watermarks Survive Translation? On the Cross-lingual Consistency of Text Watermark for Large Language Models

Authors:
He, Zhiwei
Zhou, Binglin
Hao, Hongkun
Liu, Aiwei
Wang, Xing
Tu, Zhaopeng
Zhang, Zhuosheng
Wang, Rui
Publication Year: 2024

Abstract

Text watermarking technology aims to tag and identify content produced by large language models (LLMs) to prevent misuse. In this study, we introduce the concept of cross-lingual consistency in text watermarking, which assesses the ability of text watermarks to maintain their effectiveness after being translated into other languages. Preliminary empirical results from two LLMs and three watermarking methods reveal that current text watermarking technologies lack consistency when texts are translated into various languages. Based on this observation, we propose a Cross-lingual Watermark Removal Attack (CWRA) to bypass watermarking by first obtaining a response from an LLM in a pivot language, which is then translated into the target language. CWRA can effectively remove watermarks, decreasing the AUCs to a random-guessing level without performance loss. Furthermore, we analyze two key factors that contribute to cross-lingual consistency in text watermarking and propose X-SIR as a defense method against CWRA. Code: https://github.com/zwhe99/X-SIR

Comment: ACL 2024 (main conference)
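For illustration, the sketch below outlines the CWRA pipeline as the abstract describes it: translate the prompt into a pivot language, obtain the watermarked LLM's response in that language, and translate the response into the target language. The `translate` and `query_llm` helpers are hypothetical placeholders introduced for this sketch, not the authors' API; the actual implementation is in the linked repository.

```python
# Minimal sketch of the Cross-lingual Watermark Removal Attack (CWRA)
# described in the abstract. The helpers below are hypothetical stand-ins,
# not the authors' code; see https://github.com/zwhe99/X-SIR for the
# real implementation.

def translate(text: str, target_lang: str) -> str:
    # Placeholder: in practice, a machine translation system would be
    # called here.
    raise NotImplementedError

def query_llm(prompt: str) -> str:
    # Placeholder: a watermarked LLM generates the pivot-language response.
    raise NotImplementedError

def cwra(prompt: str, pivot_lang: str = "zh", target_lang: str = "en") -> str:
    """Attempt to remove a text watermark via a pivot language."""
    # 1. Pose the prompt in the pivot language.
    pivot_prompt = translate(prompt, target_lang=pivot_lang)
    # 2. The watermarked LLM answers in the pivot language.
    pivot_response = query_llm(pivot_prompt)
    # 3. Translating the answer into the target language degrades the
    #    watermark signal, since current watermarks lack cross-lingual
    #    consistency.
    return translate(pivot_response, target_lang=target_lang)
```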

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2402.14007
Document Type: Working Paper