
A Quantization-based Technique for Privacy Preserving Distributed Learning

Authors:
Colombo, Maurizio
Asal, Rasool
Damiani, Ernesto
AlQassem, Lamees Mahmoud
Almemari, Al Anoud
Alhammadi, Yousof
Publication Year:
2024

Abstract

The massive deployment of Machine Learning (ML) models raises serious concerns about data protection. Privacy-enhancing technologies (PETs) offer a promising first step, but hard challenges remain in achieving confidentiality and differential privacy in distributed learning. In this paper, we describe a novel, regulation-compliant data protection technique for the distributed training of ML models, applicable throughout the ML life cycle regardless of the underlying ML architecture. Designed from the data owner's perspective, our method protects both the training data and the ML model parameters by employing a protocol based on a quantized multi-hash data representation, Hash-Comb, combined with randomization. The hyper-parameters of our scheme can be shared using standard Secure Multi-Party Computation (SMPC) protocols. Our experimental results demonstrate the robustness and accuracy-preserving properties of our approach.
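The record does not include an implementation, so the Python sketch below is only one plausible reading of the idea the abstract names: quantize each input value at several resolutions, hash each quantized level into a "comb" of digests, and apply randomization to the comb before sharing. The function names (hash_comb, randomize), the progressive-coarsening design, and all parameter choices are assumptions made for illustration; they are not the authors' actual Hash-Comb construction, and the SMPC sharing of hyper-parameters is not modeled here.

    import hashlib
    import random

    def quantize(value: float, step: float) -> int:
        """Uniformly quantize a real value into an integer bucket index."""
        return int(value // step)

    def hash_comb(value: float, step: float, levels: int,
                  salt: bytes = b"demo") -> list[str]:
        """Encode a value as a 'comb' of hashes, one per quantization level.

        ASSUMED design: each successive level doubles the quantization step,
        so nearby values agree on the coarser hashes while the raw value is
        never revealed directly.
        """
        comb = []
        for level in range(levels):
            bucket = quantize(value, step * (2 ** level))
            payload = (salt + level.to_bytes(2, "big")
                       + bucket.to_bytes(8, "big", signed=True))
            comb.append(hashlib.sha256(payload).hexdigest())
        return comb

    def randomize(comb: list[str], flip_prob: float,
                  rng: random.Random) -> list[str]:
        """Randomized response: swap each hash for noise with prob. flip_prob."""
        return [h if rng.random() > flip_prob else rng.randbytes(32).hex()
                for h in comb]

    # Example: two close readings share most of their (randomized) combs.
    rng = random.Random(0)
    a = randomize(hash_comb(3.14, step=0.1, levels=8), flip_prob=0.1, rng=rng)
    b = randomize(hash_comb(3.17, step=0.1, levels=8), flip_prob=0.1, rng=rng)
    print(sum(x == y for x, y in zip(a, b)), "of 8 levels match")

Under these assumptions, comb overlap acts as a privacy-preserving similarity signal: parties can compare matching digests without exchanging raw training values, while the randomization step injects the plausible deniability that differential-privacy-style guarantees would require.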

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.19418
Document Type:
Working Paper