
Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation

Authors :
Huang, Tiansheng
Hu, Sihao
Ilhan, Fatih
Tekin, Selim Furkan
Liu, Ling
Publication Year :
2025

Abstract

Recent research shows that Large Language Models (LLMs) are vulnerable to harmful fine-tuning attacks -- models lose their safety alignment after fine-tuning on a few harmful samples. For risk mitigation, a guardrail is typically used to filter out harmful samples before fine-tuning. By designing a new red-teaming method, we show in this paper that relying purely on the moderation guardrail for data filtration is not reliable. Our proposed attack method, dubbed Virus, easily bypasses guardrail moderation by slightly modifying the harmful data. Experimental results show that the harmful data optimized by Virus evades detection by the guardrail, achieving a leakage ratio of up to 100%, while simultaneously achieving superior attack performance. Finally, the key message we want to convey is that it is reckless to rely on guardrail moderation as a last straw of defense against harmful fine-tuning attacks, as it cannot solve the inherent safety issue of pre-trained LLMs. Our code is available at https://github.com/git-disl/Virus
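The threat model described above assumes a fine-tuning pipeline in which a moderation guardrail screens user-submitted training data before it reaches the model. A minimal sketch of that filtration step follows; the scoring function, threshold, and helper names are illustrative assumptions for exposition, not the specific guardrail or attack implementation from the paper.

```python
# Illustrative sketch of guardrail-based data filtration before fine-tuning.
# The moderation scorer and threshold below are hypothetical placeholders,
# not the particular guardrail model evaluated in the paper.
from typing import Callable


def filter_finetuning_data(
    samples: list[str],
    harm_score: Callable[[str], float],
    threshold: float = 0.5,
) -> list[str]:
    """Keep only samples whose moderation score falls below the threshold.

    An attack like Virus succeeds when harmful samples are perturbed so that
    harm_score(sample) < threshold (they "leak" past the guardrail) while
    still degrading the model's safety alignment during fine-tuning.
    """
    return [s for s in samples if harm_score(s) < threshold]


if __name__ == "__main__":
    # Toy scoring function standing in for a real moderation model.
    def toy_harm_score(text: str) -> float:
        return 1.0 if "harmful" in text.lower() else 0.0

    data = ["benign instruction", "obviously harmful request"]
    print(filter_finetuning_data(data, toy_harm_score))
    # -> ['benign instruction']
```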

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2501.17433
Document Type :
Working Paper