
LoRA-as-an-Attack! Piercing LLM Safety Under The Share-and-Play Scenario

Authors :
Liu, Hongyi
Liu, Zirui
Tang, Ruixiang
Yuan, Jiayi
Zhong, Shaochen
Chuang, Yu-Neng
Li, Li
Chen, Rui
Hu, Xia
Publication Year :
2024

Abstract

Fine-tuning LLMs is crucial to enhancing their task-specific performance and ensuring model behaviors are aligned with human preferences. Among various fine-tuning methods, LoRA is popular for its efficiency and ease of use, allowing end-users to easily post and adopt lightweight LoRA modules on open-source platforms to tailor their models to different customizations. However, such a handy share-and-play setting opens up new attack surfaces: an attacker can render LoRA itself an attack vector, e.g., by injecting a backdoor, and easily distribute the adversarial LoRA module to the community, with potentially detrimental outcomes. Despite these substantial risks, the sharing of LoRA modules has not been fully explored from a security perspective. To fill this gap, in this study we thoroughly investigate the attack opportunities enabled by the growing share-and-play scenario. Specifically, we study how to inject a backdoor into a LoRA module and dive deeper into LoRA's infection mechanisms. We find that training-free backdoor injection into LoRA is possible. We also examine the impact of backdoor attacks in the presence of multiple concurrent LoRA adaptations, as well as the transferability of LoRA-based backdoors. Our aim is to raise awareness of the potential risks under the emerging share-and-play scenario, so as to proactively prevent potential consequences caused by LoRA-as-an-Attack. Warning: the paper contains potentially offensive content generated by models.
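
To make the "training-free" claim concrete, the sketch below illustrates why merging an adversarial update into a benign LoRA module requires no gradient steps: a LoRA update is an additive low-rank term W' = W + BA, so two updates can be folded together by stacking their factor matrices. All dimensions, names, and randomly initialized tensors here are hypothetical illustrations, not the paper's actual attack procedure.

```python
# Minimal sketch (hypothetical shapes and tensors): LoRA's additive
# low-rank structure lets two adapters be merged by concatenating
# their factor matrices, with no training involved.
import torch

d, k = 4096, 4096          # weight dimensions of the adapted layer
r_benign, r_adv = 8, 4     # ranks of the benign and adversarial updates

# Benign adapter factors (as might be shared on a model hub).
A_benign = torch.randn(r_benign, k)
B_benign = torch.randn(d, r_benign)

# Adversarial factors encoding a backdoor behavior (prepared offline).
A_adv = torch.randn(r_adv, k)
B_adv = torch.randn(d, r_adv)

# Training-free merge: stacking factors sums the low-rank updates, since
# [B_benign | B_adv] @ [[A_benign], [A_adv]] = B_benign@A_benign + B_adv@A_adv.
A_merged = torch.cat([A_benign, A_adv], dim=0)   # (r_benign + r_adv, k)
B_merged = torch.cat([B_benign, B_adv], dim=1)   # (d, r_benign + r_adv)

# Verify the merged adapter produces the same weight delta as applying
# both adapters additively.
delta_sum = B_benign @ A_benign + B_adv @ A_adv
delta_merged = B_merged @ A_merged
assert torch.allclose(delta_sum, delta_merged, atol=1e-4)
```

The merged module is a valid LoRA adapter of rank r_benign + r_adv, which is why such a combination can be distributed and applied like any ordinary shared adapter.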

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2403.00108
Document Type :
Working Paper