Rewind-to-Delete: Certified Machine Unlearning for Nonconvex Functions
- Publication Year :
- 2024
Abstract
- Machine unlearning algorithms aim to efficiently remove data from a model without retraining it from scratch, in order to enforce data privacy, remove corrupted or outdated data, or respect a user's "right to be forgotten." Certified machine unlearning is a strong theoretical guarantee that quantifies the extent to which data is erased from the model weights. Most prior works in certified unlearning focus on models trained on convex or strongly convex loss functions, which benefit from convenient convergence guarantees and the existence of global minima. For nonconvex objectives, existing algorithms rely on limiting assumptions and expensive computations that hinder practical implementations. In this work, we propose a simple first-order algorithm for unlearning on general nonconvex loss functions which unlearns by "rewinding" to an earlier step during the learning process and then performs gradient descent on the loss function of the retained data points. Our algorithm is black-box, in that it can be directly applied to models pretrained with vanilla gradient descent with no prior consideration of unlearning. We prove $(\epsilon, \delta)$ certified unlearning and performance guarantees that establish the privacy-utility-complexity tradeoff of our algorithm, with special consideration for nonconvex functions that satisfy the Polyak-Łojasiewicz inequality.
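- The following is a minimal NumPy sketch of the rewind-to-delete idea as described in the abstract: run vanilla gradient descent, save an earlier iterate, and unlearn by rewinding to that checkpoint and continuing gradient descent on the retained data only. All names (the toy least-squares loss, `checkpoint_step`, learning rates) are illustrative assumptions, not the paper's implementation; a certified $(\epsilon, \delta)$ guarantee would typically also involve perturbing the released weights, which is omitted here.

```python
# Sketch of "rewind-to-delete" unlearning on a toy objective.
# All hyperparameters and the loss are illustrative assumptions.
import numpy as np

def grad(w, X, y):
    # Gradient of a simple least-squares loss 0.5*||Xw - y||^2 / n,
    # standing in for the general nonconvex loss treated in the paper.
    return X.T @ (X @ w - y) / len(y)

def train(w0, X, y, steps, lr, checkpoint_step):
    # Vanilla gradient descent; save an earlier iterate so a pretrained
    # model can later be "rewound" without retraining from scratch.
    w, checkpoint = w0.copy(), None
    for t in range(steps):
        if t == checkpoint_step:
            checkpoint = w.copy()
        w -= lr * grad(w, X, y)
    return w, checkpoint

def unlearn(checkpoint, X_retain, y_retain, steps, lr):
    # Rewind to the saved checkpoint, then run gradient descent on the
    # loss of the retained data points only.
    w = checkpoint.copy()
    for _ in range(steps):
        w -= lr * grad(w, X_retain, y_retain)
    return w

# Usage: train on all data, then unlearn by dropping the first 10 points.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
w_full, ckpt = train(np.zeros(5), X, y, steps=200, lr=0.1, checkpoint_step=50)
w_unlearned = unlearn(ckpt, X[10:], y[10:], steps=150, lr=0.1)
```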
- Subjects :
- Computer Science - Machine Learning
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2409.09778
- Document Type :
- Working Paper