1. Federated split GANs
- Author
Kortoçi, Pranvera; Liang, Yilei; Zhou, Pengyuan; Lee, Lik-Hang; Mehrabi, Abbas; Hui, Pan; Tarkoma, Sasu; Crowcroft, Jon
- Abstract
Mobile devices and the immense amount and variety of data they generate are key enablers of machine learning (ML)-based applications. Traditional ML techniques have shifted toward new paradigms such as federated learning (FL) and split learning (SL) to improve the protection of users' data privacy. However, SL often relies on servers located at the edge or in the cloud to train the computationally heavy parts of an ML model, so as to avoid draining the limited resources of client devices; this can expose device data to such third parties. This work proposes an alternative approach that trains computationally heavy ML models on users' devices themselves, where the corresponding data resides. Specifically, we focus on GANs (generative adversarial networks) and leverage their network architecture to preserve data privacy. We train the discriminative part of a GAN on users' devices with their local data, whereas the generative model is trained remotely (e.g., on a server), with no need to access the devices' true data. Moreover, our approach ensures that the computational load of training the discriminative model is shared among users' devices, proportionally to their computational capabilities, by means of SL. We implement our proposed collaborative training scheme for a computationally heavy GAN model on simulated resource-constrained devices. The results show that our system preserves data privacy, keeps training time short, and yields the same model accuracy as training on devices with unconstrained resources (e.g., in the cloud). © 2022 ACM.
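The division of labor the abstract describes can be illustrated with a minimal toy sketch: the server holds the generator, the client holds the discriminator together with its private data, and only synthetic samples and gradients cross the boundary. The linear generator, logistic discriminator, data distribution, and learning rates below are all hypothetical placeholders for illustration, not the authors' implementation (and the SL-based load sharing across multiple devices is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Server side: a toy linear generator G(z) = z @ W_g + b_g ---
W_g, b_g = rng.normal(size=(2, 2)), np.zeros(2)

def generate(z):
    return z @ W_g + b_g

# --- Client side: a toy logistic discriminator D(x) = sigmoid(x @ w_d + b_d).
#     The real data below never leaves this "device". ---
w_d, b_d = np.zeros(2), 0.0
real_data = rng.normal(loc=3.0, size=(64, 2))  # private on-device data

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def client_train_discriminator(fake, lr=0.1):
    """One local D update on private real data vs. server-sent fake samples."""
    global w_d, b_d
    x = np.vstack([real_data, fake])
    y = np.concatenate([np.ones(len(real_data)), np.zeros(len(fake))])
    p = sigmoid(x @ w_d + b_d)
    w_d -= lr * x.T @ (p - y) / len(y)   # logistic-loss gradient step
    b_d -= lr * np.mean(p - y)

def client_feedback(fake):
    """Send back only dL/d(fake) for generator loss L = -log D(fake)."""
    p = sigmoid(fake @ w_d + b_d)
    return -(1.0 - p)[:, None] * w_d[None, :]

# --- Training loop: only fake samples and their gradients are exchanged ---
init_mean = generate(rng.normal(size=(1000, 2))).mean(axis=0)
for step in range(200):
    z = rng.normal(size=(64, 2))
    fake = generate(z)                      # server -> client
    client_train_discriminator(fake)        # on-device D update
    g_fake = client_feedback(fake)          # client -> server (gradients only)
    W_g -= 0.05 * z.T @ g_fake / len(z)     # server updates G via chain rule
    b_g -= 0.05 * g_fake.mean(axis=0)
final_mean = generate(rng.normal(size=(1000, 2))).mean(axis=0)
print(final_mean.round(2))  # generated samples drift toward the real mean
```

Over training, the generator's output mean moves toward that of the private real data even though the server only ever sees noise, its own fake samples, and the discriminator's gradients on them, which is the privacy argument the abstract makes.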
- Published
- 2022