
Using Anonymous Protocol for Privacy Preserving Deep Learning Model

Authors :
Anh Tu Tran
The Dung Luong
Viet Hung Dang
Van Nam Huynh
Source :
2020 7th NAFOSTED Conference on Information and Computer Science (NICS).
Publication Year :
2020
Publisher :
IEEE, 2020.

Abstract

Deep learning is an effective approach to many real-world problems. The effectiveness of deep learning models depends largely on the amount of data used to train them. However, such data are often private or sensitive, which makes it challenging to collect them and to apply deep learning models in practice. In this paper, we introduce an anonymous deep neural network training protocol called ATP (Anonymous Training Protocol), in which each party owns a private dataset and all parties collectively train a global model without leaking any data to the other parties. To achieve this, we share random gradients with large aggregate mini-batch sizes, combined with the addition of temporary random noise. The random noise values are then sent back through an anonymous network so they can be filtered out during the update phase at the aggregation server. The proposed ATP model protects the shared gradients even when the aggregation server colludes with up to n-2 other participants. We evaluate the model on the MNIST dataset with a CNN architecture, achieving an accuracy of 98.09%. The results show that the proposed ATP model is highly practical for protecting privacy in deep learning.
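The core masking idea described in the abstract can be illustrated with a minimal sketch: each party adds a temporary random noise vector to its gradient before sharing, and the noise values reach the server through a separate (anonymous) channel so their sum can be subtracted during the update phase. The variable names and the use of NumPy here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_parties, dim = 5, 8  # toy sizes for illustration

# Each party's true gradient (private) and its temporary random noise.
true_grads = [rng.normal(size=dim) for _ in range(n_parties)]
noises = [rng.normal(size=dim) for _ in range(n_parties)]

# Parties share only their masked gradients with the aggregation server.
masked = [g + r for g, r in zip(true_grads, noises)]

# The noise values arrive via the anonymous network, unlinkable to any
# party; the server subtracts their sum during the update phase.
aggregate = sum(masked) - sum(noises)

# The exact aggregate gradient is recovered, while no individual
# party's true gradient was ever revealed to the server.
assert np.allclose(aggregate, sum(true_grads))
```

Because the server only ever sees each party's masked gradient and an unordered, anonymized pool of noise vectors, an individual gradient stays hidden unless the server can link a noise vector back to its owner.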

Details

Database :
OpenAIRE
Journal :
2020 7th NAFOSTED Conference on Information and Computer Science (NICS)
Accession number :
edsair.doi...........c3424fbe0af3d476f355758a56340021