
Certification for Differentially Private Prediction in Gradient-Based Training

Authors:
Wicker, Matthew
Sosnin, Philip
Shilov, Igor
Janik, Adrianna
Müller, Mark N.
de Montjoye, Yves-Alexandre
Weller, Adrian
Tsay, Calvin
Publication Year: 2024

Abstract

Differential privacy upper-bounds the information leakage of machine learning models, yet providing meaningful privacy guarantees has proven challenging in practice. The private prediction setting, in which model outputs are privatized, is being investigated as an alternative way to provide formal guarantees at prediction time. Most current private prediction algorithms, however, rely on global sensitivity for noise calibration, which often results in large amounts of noise being added to the predictions. Data-specific noise calibration, such as smooth sensitivity, could significantly reduce the amount of noise added, but has so far been infeasible to compute exactly for modern machine learning models. In this work, we provide a novel and practical approach based on convex relaxation and bound propagation to compute a provable upper bound on the local and smooth sensitivity of a prediction. This bound allows us to reduce the magnitude of noise added or improve privacy accounting in the private prediction setting. We validate our framework on datasets from financial services, medical image classification, and natural language processing, and across models, and find that our approach reduces the noise added by up to an order of magnitude.

Comment: 16 pages, 11 figures
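
The following is a minimal, hypothetical Python sketch (not the authors' implementation) of the general idea behind sensitivity-calibrated private prediction described in the abstract: noise is added to a model's output with a scale proportional to a sensitivity upper bound divided by the privacy budget, so a tighter, data-specific bound (such as the smooth-sensitivity upper bound the paper computes via bound propagation) directly translates into less noise. The function name privatize_prediction and the plain Laplace mechanism are illustrative assumptions; the paper's actual mechanism and privacy accounting differ.

import numpy as np

# Illustrative sketch, not the paper's method: privatize a prediction with
# the Laplace mechanism, scaled by a sensitivity upper bound. A tighter
# data-specific bound (e.g. a smooth-sensitivity upper bound) yields a
# smaller noise scale for the same privacy budget epsilon.
def privatize_prediction(logits, sensitivity_bound, epsilon, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity_bound / epsilon  # Laplace scale b = S / epsilon
    noise = rng.laplace(loc=0.0, scale=scale, size=np.shape(logits))
    return np.asarray(logits) + noise

# Loose global-sensitivity bound vs. a tighter data-specific bound:
logits = np.array([2.1, -0.3, 0.7])
noisy_global = privatize_prediction(logits, sensitivity_bound=1.0, epsilon=0.5)
noisy_local = privatize_prediction(logits, sensitivity_bound=0.1, epsilon=0.5)

In this sketch, shrinking sensitivity_bound from 1.0 to 0.1 reduces the noise scale tenfold at the same epsilon, which is the kind of reduction the abstract reports from its provable local and smooth sensitivity bounds.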

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2406.13433
Document Type: Working Paper