
PrivColl: Practical Privacy-Preserving Collaborative Machine Learning

Authors:
Zhang, Yanjun
Bai, Guangdong
Li, Xue
Curtis, Caitlin
Chen, Chen
Ko, Ryan K L
Publication Year:
2020

Abstract

Collaborative learning enables two or more participants, each with their own training dataset, to jointly learn a shared model. Ideally, the collaboration should disclose neither the raw dataset of any individual owner nor the local model parameters trained on it. This privacy-preservation requirement has been approached through differential privacy mechanisms, homomorphic encryption (HE), and secure multiparty computation (MPC), but existing attempts either reduce model accuracy or incur significant computational and/or communication overhead. In this work, we address this problem with the lightweight additive secret-sharing technique. We propose PrivColl, a framework for protecting local data and local models while ensuring the correctness of the training process. PrivColl employs secret sharing to securely evaluate addition operations in a multiparty computation environment, and achieves practicality by relying only on homomorphic addition. We formally prove that it guarantees privacy preservation even when the majority (n-2 out of n) of participants are corrupted. With experiments on real-world datasets, we further demonstrate that PrivColl retains high efficiency: it achieves a speedup of more than 45X over state-of-the-art MPC/HE-based schemes for training linear/logistic regression, and is 216X faster for training neural networks.

Comment: 20 pages, 3 figures, to be published in the 25th European Symposium on Research in Computer Security (ESORICS) 2020
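As a minimal illustration of the additive secret-sharing primitive the abstract refers to, the sketch below shows a secure sum among n parties: each party splits its private value into n random shares that add up to the value modulo a large prime, so the aggregate can be reconstructed while no individual value is revealed. This is not PrivColl's actual protocol; the modulus and variable names are illustrative assumptions.

```python
import secrets

Q = 2**61 - 1  # large prime modulus (illustrative choice, not from the paper)

def share(value, n):
    """Split an integer into n additive shares that sum to value mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recover the secret by summing shares mod Q."""
    return sum(shares) % Q

# Three parties each hold a private value (e.g., a local gradient component).
values = [17, 42, 99]
n = len(values)
all_shares = [share(v, n) for v in values]

# Party j receives the j-th share from every party and adds them locally.
partial_sums = [sum(all_shares[i][j] for i in range(n)) % Q for j in range(n)]

# Combining the partial sums reveals only the aggregate, not any individual value.
total = reconstruct(partial_sums)
assert total == sum(values) % Q
```

Because addition of shares commutes with addition of secrets, only homomorphic addition is needed, which is the property the abstract credits for PrivColl's efficiency over general MPC/HE schemes.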

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2007.06953
Document Type:
Working Paper