1. Fine-Tuning Language Models with Differential Privacy through Adaptive Noise Allocation
- Authors
Li, Xianzhi; Zmigrod, Ran; Ma, Zhiqiang; Liu, Xiaomo; and Zhu, Xiaodan
- Subjects
Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Cryptography and Security; Computer Science - Machine Learning
- Abstract
Language models are capable of memorizing detailed patterns and information, leading to a double-edged effect: they achieve impressive modeling performance on downstream tasks with the stored knowledge but also raise significant privacy concerns. Traditional differential privacy-based training approaches offer robust safeguards by employing a uniform noise distribution across all parameters. However, this overlooks the distinct sensitivities and contributions of individual parameters to privacy protection and often results in suboptimal models. To address these limitations, we propose ANADP, a novel algorithm that adaptively allocates additive noise based on the importance of model parameters. We demonstrate that ANADP narrows the performance gap between regular fine-tuning and traditional DP fine-tuning on a series of datasets while maintaining the required privacy constraints.
- Comment
EMNLP 2024 findings
- Published
2024
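
To make the adaptive-allocation idea in the abstract concrete, below is a minimal PyTorch sketch of a DP-SGD-style update that scales noise inversely to a per-parameter importance score. This is not the authors' ANADP implementation: the importance proxy (gradient magnitude), the function name `anadp_style_step`, and all hyperparameters are illustrative assumptions, and per-example clipping and privacy accounting are omitted for brevity.

```python
import torch


def anadp_style_step(params, grads, clip_norm=1.0, base_sigma=1.0,
                     lr=1e-3, eps=1e-8):
    """One DP-SGD-style update with adaptive per-parameter noise.

    Sketch only: importance is proxied by gradient magnitude (an
    assumption, not the paper's importance measure), and a real system
    would clip per-example gradients and certify the privacy budget
    with an accountant.
    """
    # Clip the aggregate gradient norm (standard DP-SGD clips per example).
    total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    scale = (clip_norm / (total_norm + eps)).clamp(max=1.0)
    grads = [g * scale for g in grads]

    # Importance proxy: per-parameter gradient magnitude (assumption).
    importances = [g.abs() for g in grads]
    mean_imp = torch.stack([imp.mean() for imp in importances]).mean()

    with torch.no_grad():
        for p, g, imp in zip(params, grads, importances):
            # Allocate noise inversely to importance: more important
            # parameters are perturbed less, while the average noise
            # level stays near base_sigma.
            sigma = base_sigma * mean_imp / (imp + eps)
            noise = sigma * clip_norm * torch.randn_like(g)
            p -= lr * (g + noise)
```

Under these assumptions, the contrast with traditional DP fine-tuning is the `sigma` line: uniform DP-SGD would use a single `base_sigma` for every parameter, whereas here the noise scale varies per parameter with its estimated importance.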