
A Distributed Trust Framework for Privacy-Preserving Machine Learning

Authors:
Abramson, Will
Hall, Adam James
Papadopoulos, Pavlos
Pitropakis, Nikolaos
Buchanan, William J
Publication Year:
2020
Publisher:
arXiv, 2020.

Abstract

When training a machine learning model, it is standard procedure for the researcher to have full knowledge of both the data and the model. However, this engenders a lack of trust between data owners and data scientists: data owners are justifiably reluctant to relinquish control of private information to third parties. Privacy-preserving techniques distribute computation so that the data remains under the owner's control while learning takes place. However, architectures distributed amongst multiple agents introduce an entirely new set of security and trust complications, including data poisoning and model theft. This paper outlines a distributed infrastructure that facilitates peer-to-peer trust between distributed agents collaboratively performing a privacy-preserving workflow. Our prototype sets industry gatekeepers and governance bodies as credential issuers: before participating in the distributed learning workflow, agents must first negotiate valid credentials, a barrier that excludes malicious actors. We detail a proof of concept using Hyperledger Aries, Decentralised Identifiers (DIDs) and Verifiable Credentials (VCs) to establish a distributed trust architecture during a privacy-preserving machine learning experiment. Specifically, we utilise secure and authenticated DID communication channels to facilitate a federated learning workflow involving mental health care data.

Comment: To be published in the proceedings of the 17th International Conference on Trust, Privacy and Security in Digital Business - TrustBus2020
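The credential-gating idea the abstract describes — a governance body issues credentials, and only agents presenting a valid one may join the learning workflow — can be illustrated with a plain signed-token check. This is a minimal sketch in which stdlib `hmac` stands in for the DID/VC machinery; all names are hypothetical and this is not the Hyperledger Aries API.

```python
import hmac
import hashlib

# Illustrative stand-in for verifiable credentials: a governance body
# signs an agent's identifier, and the workflow admits only holders of
# a valid signature. Names and keys are hypothetical; this sketches the
# gating pattern only, not the Hyperledger Aries / DIDComm implementation.

ISSUER_KEY = b"governance-body-secret"  # hypothetical issuer signing key


def issue_credential(agent_id: str) -> bytes:
    """Issuer signs the agent's identifier, producing a credential."""
    return hmac.new(ISSUER_KEY, agent_id.encode(), hashlib.sha256).digest()


def admit(agent_id: str, credential: bytes) -> bool:
    """Workflow gate: verify the credential before the agent participates."""
    expected = hmac.new(ISSUER_KEY, agent_id.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, credential)


cred = issue_credential("hospital-A")
print(admit("hospital-A", cred))  # True: credential matches identity
print(admit("mallory", cred))     # False: credential was not issued for this agent
```

A real deployment would replace the shared HMAC key with public-key signatures over W3C Verifiable Credentials, so verifiers never hold the issuer's signing secret.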
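The federated learning pattern the abstract relies on — each data owner trains locally and only model parameters, never raw data, leave their machine — can be sketched in a few lines. This is a minimal federated-averaging toy with a single weight; the client data and names are illustrative, not from the paper.

```python
# Minimal federated-averaging sketch: each data owner fits a 1-D model
# y ~ w * x on its own private data; a server averages the resulting
# weights, weighted by dataset size. Data and names are illustrative.


def local_update(weight: float, data, lr: float = 0.1, epochs: int = 5) -> float:
    """Gradient descent on a local least-squares fit y ~ weight * x."""
    for _ in range(epochs):
        grad = sum(2 * x * (weight * x - y) for x, y in data) / len(data)
        weight -= lr * grad
    return weight


def federated_average(global_weight: float, client_datasets) -> float:
    """One round: clients update locally; server averages by dataset size."""
    updates = [(local_update(global_weight, d), len(d)) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total


# Three hypothetical data owners, each holding private (x, y) pairs
# drawn from a relationship with true slope near 2.
clients = [
    [(1.0, 2.0), (2.0, 4.1)],
    [(3.0, 5.9), (1.5, 3.0)],
    [(2.5, 5.0), (0.5, 1.1), (4.0, 8.2)],
]

w = 0.0
for _ in range(20):
    w = federated_average(w, clients)
print(f"learned slope: {w:.2f}")  # converges close to 2
```

The trust framework the paper proposes wraps exactly this kind of loop: the credential check decides who may submit local updates, mitigating the data-poisoning and model-theft risks the abstract names.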

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....1494d55d479a9c79ef8b37321ca4391e
Full Text:
https://doi.org/10.48550/arxiv.2006.02456