
Enhancing Data Privacy in Large Language Models through Private Association Editing

Authors:
Venditti, Davide
Ruzzetti, Elena Sofia
Xompero, Giancarlo A.
Giannone, Cristina
Favalli, Andrea
Romagnoli, Raniero
Zanzotto, Fabio Massimo
Publication Year:
2024

Abstract

Because of their text-generation capabilities, large language models (LLMs) call for a significant redesign of privacy-preserving solutions in data-intensive applications. Indeed, LLMs tend to memorize and emit private information when maliciously prompted. In this paper, we introduce Private Association Editing (PAE), a novel defense against private data leakage. PAE is designed to effectively remove Personally Identifiable Information (PII) without retraining the model. Experimental results demonstrate the effectiveness of PAE with respect to alternative baseline methods. We believe PAE will serve as a critical tool in the ongoing effort to protect data privacy in LLMs, encouraging the development of safer models for real-world applications.
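The abstract does not detail the editing procedure, but the idea of removing a memorized association without retraining can be illustrated with a generic rank-one weight edit in the style of associative-memory model editing. The sketch below is a hypothetical illustration, not the paper's actual PAE algorithm: it treats a single linear layer as a key-value memory and rewires the key extracted from a PII-eliciting prompt to a neutral value. The names `rank_one_edit`, `k_private`, and `v_neutral` are assumptions introduced for the example.

```python
import torch

def rank_one_edit(W: torch.Tensor, k: torch.Tensor, v_new: torch.Tensor) -> torch.Tensor:
    """Return an edited weight matrix W' such that W' @ k == v_new,
    via a rank-one correction that leaves directions orthogonal to k unchanged."""
    v_old = W @ k                          # value currently associated with the key
    delta = (v_new - v_old).unsqueeze(1)   # required change in the output for this key (d_out x 1)
    k_row = k.unsqueeze(0)                 # key as a row vector (1 x d_in)
    # W' = W + (v_new - v_old) k^T / (k^T k)
    return W + delta @ k_row / (k @ k)

# Toy usage: a random "layer" whose private key currently maps to some memorized value.
d_in, d_out = 64, 64
W = torch.randn(d_out, d_in)
k_private = torch.randn(d_in)        # hypothetical key derived from a PII-eliciting prompt
v_neutral = torch.zeros(d_out)       # neutral target value replacing the private association

W_edited = rank_one_edit(W, k_private, v_neutral)
print(torch.allclose(W_edited @ k_private, v_neutral, atol=1e-5))  # True
```

In an actual edited LLM, the key and value vectors would be hidden representations taken from a specific layer rather than random tensors, and the update would be constrained to preserve the model's behavior on unrelated inputs.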

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.18221
Document Type:
Working Paper