Secure Aggregation for Federated Learning in Flower
- Author
- Nicholas D. Lane, Daniel J. Beutel, Kwing Hei Li, and Pedro Porto Buarque de Gusmão
- Subjects
- Computer science, Machine Learning (cs.LG), Cryptography and Security (cs.CR), Distributed computing, Federated learning, Secure multi-party computation, Threat model, Private information retrieval, Python (programming language), Vulnerability (computing)
- Abstract
Federated Learning (FL) allows parties to learn a shared prediction model by delegating the training computation to clients and aggregating all the separately trained models on the server. To prevent private information from being inferred from local models, Secure Aggregation (SA) protocols are used to ensure that the server is unable to inspect individual trained models as it aggregates them. However, current implementations of SA in FL frameworks have limitations, including vulnerability to client dropouts or configuration difficulties. In this paper, we present Salvia, an implementation of SA for Python users in the Flower FL framework. Based on the SecAgg(+) protocols for a semi-honest threat model, Salvia is robust against client dropouts and exposes a flexible and easy-to-use API that is compatible with various machine learning frameworks. We show that Salvia's experimental performance is consistent with SecAgg(+)'s theoretical computation and communication complexities.
- Comment
- Accepted to appear in the 2nd International Workshop on Distributed Machine Learning
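For intuition, the sketch below illustrates the pairwise-masking idea underlying the SecAgg(+) family of protocols the abstract refers to: each pair of clients agrees on a shared random mask, one adds it and the other subtracts it, so the server only ever sees masked updates while their sum equals the sum of the plaintext updates. This is a minimal, self-contained illustration; the names and structure are assumptions made for the example and do not reflect Salvia's actual API or the full protocol (key agreement, secret sharing for dropout recovery, quantization).

```python
import numpy as np

# Illustrative sketch of mask-based secure aggregation (the core idea of
# SecAgg/SecAgg+). Not Salvia's API: all names here are hypothetical.

rng = np.random.default_rng(0)
n_clients, dim = 4, 8

# Local model updates that each client wants to keep private.
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Pairwise masks: for each pair (i, j) with i < j, client i adds the mask
# and client j subtracts it, so the masks cancel in the aggregate.
pair_masks = {
    (i, j): rng.normal(size=dim)
    for i in range(n_clients)
    for j in range(i + 1, n_clients)
}

def masked_update(i):
    """Return client i's update with all of its pairwise masks applied."""
    masked = updates[i].copy()
    for (a, b), mask in pair_masks.items():
        if a == i:
            masked += mask
        elif b == i:
            masked -= mask
    return masked

# The server only ever receives masked updates ...
server_view = [masked_update(i) for i in range(n_clients)]
# ... yet their sum equals the sum of the plaintext updates.
assert np.allclose(sum(server_view), sum(updates))
```

In the real protocols, the pairwise masks are derived from shared keys and secret-shared among clients so the sum can still be recovered when some clients drop out, which is the robustness property the abstract highlights.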
- Published
- 2021