
Direct Unlearning Optimization for Robust and Safe Text-to-Image Models

Authors :
Park, Yong-Hyun
Yun, Sangdoo
Kim, Jin-Hwa
Kim, Junho
Jang, Geonhui
Jeong, Yonghyun
Jo, Junghyo
Lee, Gayoung
Publication Year :
2024

Abstract

Recent advancements in text-to-image (T2I) models have greatly benefited from large-scale datasets, but they also pose significant risks due to the potential generation of unsafe content. To mitigate this issue, researchers have developed unlearning techniques to remove the model's ability to generate potentially harmful content. However, these methods are easily bypassed by adversarial attacks, making them unreliable for ensuring the safety of generated images. In this paper, we propose Direct Unlearning Optimization (DUO), a novel framework for removing Not Safe For Work (NSFW) content from T2I models while preserving their performance on unrelated topics. DUO employs a preference optimization approach using curated paired image data, ensuring that the model learns to remove unsafe visual concepts while retaining unrelated features. Furthermore, we introduce an output-preserving regularization term to maintain the model's generative capabilities on safe content. Extensive experiments demonstrate that DUO can robustly defend against various state-of-the-art red teaming methods without significant performance degradation on unrelated topics, as measured by FID and CLIP scores. Our work contributes to the development of safer and more reliable T2I models, paving the way for their responsible deployment in both closed-source and open-source scenarios.

Comment: Extended abstract accepted in GenLaw 2024 workshop @ ICML2024
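The abstract describes two components: a preference-optimization loss over curated safe/unsafe image pairs, and an output-preserving regularizer on safe content. The sketch below illustrates the general shape of such an objective using a DPO-style log-sigmoid preference loss plus a mean-squared output-preservation term. All function names, arguments, and the exact loss form are illustrative assumptions for exposition; they are not the authors' implementation or the paper's exact objective.

```python
import numpy as np

def preference_loss(logp_safe, logp_unsafe, ref_logp_safe, ref_logp_unsafe, beta=0.1):
    """DPO-style preference loss (illustrative, not the paper's exact objective).

    Encourages the fine-tuned model to assign relatively higher likelihood to the
    'safe' image of a curated pair than to its 'unsafe' counterpart, measured
    against a frozen reference model. Loss = -log sigmoid(beta * margin).
    """
    margin = beta * ((logp_safe - ref_logp_safe) - (logp_unsafe - ref_logp_unsafe))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))

def output_preserving_reg(model_out, ref_out):
    """Hypothetical output-preservation term: penalize drift from the frozen
    reference model's outputs on safe prompts (mean squared error)."""
    model_out = np.asarray(model_out, dtype=float)
    ref_out = np.asarray(ref_out, dtype=float)
    return np.mean((model_out - ref_out) ** 2)
```

As a sanity check on the intended behavior: at initialization (model equals reference, all log-probabilities cancel) the preference loss is log 2, and it shrinks toward zero as the model's preference for the safe image over the unsafe one grows, while the regularizer is zero exactly when the model reproduces the reference outputs on safe content.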

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2407.21035
Document Type :
Working Paper