Towards Efficient and Scalable Training of Differentially Private Deep Learning
- Publication Year :
- 2024
Abstract
- Differentially private stochastic gradient descent (DP-SGD) is the standard algorithm for training machine learning models under differential privacy (DP). The most common DP-SGD privacy accountants rely on Poisson subsampling to ensure the theoretical DP guarantees. Implementing computationally efficient DP-SGD with Poisson subsampling is not trivial, which leads many implementations to ignore this requirement. We conduct a comprehensive empirical study to quantify the computational cost of training deep learning models under DP given the requirement of Poisson subsampling, by re-implementing efficient methods using Poisson subsampling and benchmarking them. We find that a naive DP-SGD implementation with Opacus in PyTorch has between 2.6 and 8 times lower throughput of processed training examples per second than SGD. However, efficient gradient clipping implementations such as Ghost Clipping can roughly halve this cost. We propose alternative, computationally efficient implementations of DP-SGD in JAX that use Poisson subsampling and achieve only around 1.2 times lower throughput than PyTorch-based SGD. We highlight important implementation considerations with JAX. Finally, we study the scaling behaviour using up to 80 GPUs and find that DP-SGD scales better than SGD. We share our re-implementations using Poisson subsampling at https://github.com/DPBayes/Towards-Efficient-Scalable-Training-DP-DL.
- Comment: 17 pages, 12 figures
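- For context, the sketch below shows what Poisson-subsampled DP-SGD means operationally in JAX: each training example is included in a step independently with probability q, per-example gradients are clipped to a fixed L2 norm, and Gaussian noise is added to the clipped sum before the update. This is a minimal illustration under stated assumptions, not the authors' code from the repository above; the linear model, the names `per_example_loss` and `dp_sgd_step`, and the default hyperparameters are placeholders. For clarity it computes gradients for every example and masks out the unsampled ones, which is exactly the kind of inefficiency the paper's efficient implementations are designed to avoid.

```python
import jax
import jax.numpy as jnp


def per_example_loss(params, x, y):
    # Placeholder linear model; substitute the model under study.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)


def dp_sgd_step(params, data, key, q=0.01, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1):
    x, y = data
    n = x.shape[0]
    key_sample, key_noise = jax.random.split(key)

    # Poisson subsampling: each example is included independently with prob. q.
    mask = jax.random.bernoulli(key_sample, q, shape=(n,)).astype(x.dtype)

    # Per-example gradients for the whole dataset (simple but inefficient).
    grads = jax.vmap(jax.grad(per_example_loss), in_axes=(None, 0, 0))(params, x, y)

    # Per-example L2 norms across all parameter leaves.
    sq_norms = sum(jnp.sum(g.reshape(g.shape[0], -1) ** 2, axis=1)
                   for g in jax.tree_util.tree_leaves(grads))
    # Clip to clip_norm and zero out examples not selected by the mask.
    scale = mask * jnp.minimum(1.0, clip_norm / (jnp.sqrt(sq_norms) + 1e-12))

    # Weighted sum over the batch axis for every parameter leaf.
    summed = jax.tree_util.tree_map(
        lambda g: jnp.tensordot(scale, g, axes=1), grads)

    # Add Gaussian noise calibrated to the clipping norm.
    leaves, treedef = jax.tree_util.tree_flatten(summed)
    noise_keys = jax.random.split(key_noise, len(leaves))
    noisy = [g + noise_multiplier * clip_norm * jax.random.normal(k, g.shape)
             for g, k in zip(leaves, noise_keys)]

    # Average by the expected batch size q * n and take an SGD step.
    update = jax.tree_util.tree_unflatten(treedef, [g / (q * n) for g in noisy])
    return jax.tree_util.tree_map(lambda p, u: p - lr * u, params, update)
```
- As a hypothetical usage, `params = {"w": jnp.zeros(d), "b": jnp.zeros(())}` with data arrays `x` of shape `(n, d)` and `y` of shape `(n,)` would run one step; the per-step privacy cost would then be tracked with a Poisson-subsampled Gaussian-mechanism accountant, which is the setting the paper's benchmarks assume.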
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2406.17298
- Document Type :
- Working Paper