
Photonic Differential Privacy with Direct Feedback Alignment

Authors: Ohana, Ruben; Medina Ruiz, Hamlet J.; Launay, Julien; Cappelli, Alessandro; Poli, Iacopo; Ralaivola, Liva; Rakotomamonjy, Alain
Source: NeurIPS 2021
Publication Year: 2021

Abstract

Optical Processing Units (OPUs) -- low-power photonic chips dedicated to large-scale random projections -- have been used in previous work to train deep neural networks using Direct Feedback Alignment (DFA), an effective alternative to backpropagation. Here, we demonstrate how to leverage the intrinsic noise of optical random projections to build a differentially private DFA mechanism, making OPUs a solution of choice for private-by-design training. We provide a theoretical analysis of our adaptive privacy mechanism, carefully measuring how the noise of optical random projections propagates through the process and gives rise to provable Differential Privacy. Finally, we conduct experiments demonstrating the ability of our learning procedure to achieve solid end-task performance.
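To make the mechanism described in the abstract concrete, below is a minimal sketch of DFA training in which the error signal is passed through a fixed random projection with additive Gaussian noise standing in for the OPU's intrinsic optical noise. This is an illustrative toy (a two-hidden-layer MLP in NumPy), not the authors' implementation; all names (train_step, noise_std, B1, B2) and the choice of additive Gaussian noise are assumptions made for the example.

```python
# Toy noisy-DFA sketch: a fixed random feedback projection of the output error,
# perturbed by Gaussian noise (stand-in for the photonic noise), replaces
# backpropagation through the forward weights.
import numpy as np

rng = np.random.default_rng(0)

# Network dimensions: input, two hidden layers, output.
d_in, d_h1, d_h2, d_out = 20, 64, 64, 10

# Trainable forward weights.
W1 = rng.normal(0, 0.1, (d_in, d_h1))
W2 = rng.normal(0, 0.1, (d_h1, d_h2))
W3 = rng.normal(0, 0.1, (d_h2, d_out))

# Fixed random feedback matrices (the role the OPU's random projection plays).
B1 = rng.normal(0, 0.1, (d_out, d_h1))
B2 = rng.normal(0, 0.1, (d_out, d_h2))

noise_std = 0.05   # assumed stand-in for the optical noise level
lr = 0.01


def relu(x):
    return np.maximum(x, 0.0)


def train_step(x, y_onehot):
    global W1, W2, W3

    # Forward pass.
    a1 = relu(x @ W1)
    a2 = relu(a1 @ W2)
    logits = a2 @ W3
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)

    # Global error at the output (softmax cross-entropy gradient).
    e = probs - y_onehot                      # shape (batch, d_out)

    # DFA: project the error through fixed random matrices instead of
    # backpropagating through W2 and W3. The added Gaussian noise models
    # the OPU's intrinsic noise on the random projection.
    f1 = e @ B1 + rng.normal(0, noise_std, (x.shape[0], d_h1))
    f2 = e @ B2 + rng.normal(0, noise_std, (x.shape[0], d_h2))

    # Layer-local update signals, gated by the activation derivative.
    g1 = f1 * (a1 > 0)
    g2 = f2 * (a2 > 0)

    W1 -= lr * x.T @ g1
    W2 -= lr * a1.T @ g2
    W3 -= lr * a2.T @ e


# Usage: a few steps on random data.
x = rng.normal(size=(32, d_in))
y = np.eye(d_out)[rng.integers(0, d_out, 32)]
for _ in range(10):
    train_step(x, y)
```

In the paper's setting, the noise enters through the optical hardware itself rather than being sampled in software as above; the sketch only illustrates how a noisy random projection of the error can play the role of a Gaussian-mechanism-style perturbation inside DFA updates.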

Details

Database: arXiv
Journal: NeurIPS 2021
Publication Type: Report
Accession Number: edsarx.2106.03645
Document Type: Working Paper