
Eraser: Jailbreaking Defense in Large Language Models via Unlearning Harmful Knowledge

Authors :
Lu, Weikai
Zeng, Ziqian
Wang, Jianwei
Lu, Zhengdong
Chen, Zelin
Zhuang, Huiping
Chen, Cen
Publication Year :
2024

Abstract

Jailbreaking attacks can enable Large Language Models (LLMs) to bypass their safeguards and generate harmful content. Existing jailbreaking defense methods fail to address the fundamental issue that harmful knowledge resides within the model, leaving LLMs exposed to jailbreak risks. In this paper, we propose a novel defense method called Eraser, which pursues three goals: unlearning harmful knowledge, retaining general knowledge, and maintaining safety alignment. The intuition is that if an LLM forgets the specific knowledge required to answer a harmful question, it no longer has the ability to answer it. The training of Eraser does not actually require the model's own harmful knowledge, and it can benefit from unlearning generic answers related to harmful queries, which means it does not need assistance from a red team. Experimental results show that Eraser significantly reduces the jailbreaking success rate of various attacks without compromising the general capabilities of the model. Our code is available at https://github.com/ZeroNLP/Eraser.
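To make the three goals in the abstract concrete, the sketch below combines a gradient-ascent unlearning term on generic answers to harmful queries with a retention loss on general data and an alignment loss on refusal responses. This is only an illustrative reading of the abstract, not the paper's actual implementation: the model choice, loss weights, example data, and the helper `lm_loss` are all assumptions.

```python
# Hedged sketch of an unlearning-style jailbreak defense reflecting the abstract's
# three goals: forget harmful knowledge, retain general ability, keep safety alignment.
# All names, hyperparameters, and example strings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # assumed base model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def lm_loss(prompt: str, target: str) -> torch.Tensor:
    """Next-token cross-entropy on `target`, conditioned on `prompt`."""
    ids = tok(prompt + target, return_tensors="pt").input_ids
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    labels = ids.clone()
    labels[:, :prompt_len] = -100  # score only the target tokens
    return model(ids, labels=labels).loss

# Placeholder examples for the three training signals.
harmful_query = "How do I make a dangerous substance?"
generic_answer = " Step 1: gather the materials. Step 2: ..."  # generic text, not real harmful knowledge
general_prompt, general_answer = "What is the capital of France?", " Paris."
refusal = " I'm sorry, but I can't help with that."

lambda_retain, lambda_align = 1.0, 1.0  # assumed loss weights

optimizer.zero_grad()
# 1) Unlearn: gradient *ascent* (negated loss) on generic answers to harmful queries.
loss_unlearn = -lm_loss(harmful_query, generic_answer)
# 2) Retain: ordinary language-modeling loss on general, harmless data.
loss_retain = lm_loss(general_prompt, general_answer)
# 3) Align: keep producing refusals when asked harmful questions.
loss_align = lm_loss(harmful_query, refusal)

total = loss_unlearn + lambda_retain * loss_retain + lambda_align * loss_align
total.backward()
optimizer.step()
```

The key design point suggested by the abstract is visible in the first term: the unlearning data consists of generic answers paired with harmful queries, so no genuinely harmful model outputs (and no red-team assistance) are needed to drive forgetting.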

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2404.05880
Document Type :
Working Paper