An Efficient Parallel Secure Machine Learning Framework on GPUs
- Author
- Amelie Chi Zhou, Feng Zhang, Zheng Chen, Chenyang Zhang, Jidong Zhai, and Xiaoyong Du
- Subjects
- Distributed computing, data processing, speedup, computation, pipeline (computing), cloud computing, machine learning, servers, signal processing, hardware and architecture, computational theory and mathematics, artificial intelligence, performance metrics, data transmission
- Abstract
Machine learning is widely used in our daily lives. Large amounts of data are continuously produced and transmitted to the cloud for model training and data processing, which raises a problem: how to preserve the security of the data. Recently, a secure machine learning system named SecureML was proposed to solve this issue using two-party computation. However, due to the excessive computational expense of two-party computation, secure machine learning is about 2× slower than the original machine learning methods. Previous work on secure machine learning has mostly focused on novel protocols or on improving accuracy, while performance has been largely ignored. In this article, we propose a GPU-based framework, ParSecureML, to improve the performance of secure machine learning algorithms based on two-party computation. The main challenges in developing ParSecureML lie in the complex computation patterns, frequent intra-node data transmission between CPU and GPU, and complicated inter-node data dependences. To handle these challenges, we propose a series of novel solutions, including profiling-guided adaptive GPU utilization, a fine-grained double pipeline for intra-node CPU-GPU cooperation, and compressed transmission for inter-node communication. Moreover, we integrate architecture-specific optimizations, such as Tensor Cores, into ParSecureML. To the best of our knowledge, this is the first GPU-based secure machine learning framework. Compared to the state-of-the-art framework, ParSecureML achieves an average 33.8× speedup. ParSecureML can also be applied to inference, achieving a 31.7× speedup on average.
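The two-party computation underlying SecureML-style frameworks is typically built on additive secret sharing, where a multiplication of two secret values is performed with the help of a precomputed Beaver triple. The abstract does not give the protocol details, so the following is only a minimal single-process sketch of that standard technique (the modulus, function names, and the in-process "dealer" are illustrative assumptions, not ParSecureML's actual implementation):

```python
import random

P = 2**61 - 1  # illustrative modulus; real systems often work mod 2^64

def share(x):
    """Split x into two additive shares mod P, one per party."""
    r = random.randrange(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    """Recombine the two shares to reveal the value."""
    return (s0 + s1) % P

# A trusted dealer (or offline phase) generates a Beaver triple
# (a, b, c) with c = a*b mod P, secret-shared between the parties.
a, b = random.randrange(P), random.randrange(P)
c = (a * b) % P
a0, a1 = share(a)
b0, b1 = share(b)
c0, c1 = share(c)

def secure_mul(x, y):
    """Multiply secrets x and y on shares, consuming the triple above.
    (A triple must not be reused in a real protocol.)"""
    x0, x1 = share(x)
    y0, y1 = share(y)
    # Both parties open the masked values e = x - a and f = y - b;
    # these reveal nothing about x and y because a and b are random.
    e = reconstruct((x0 - a0) % P, (x1 - a1) % P)
    f = reconstruct((y0 - b0) % P, (y1 - b1) % P)
    # Each party computes its share of x*y locally:
    # x*y = (e + a)(f + b) = e*f + e*b + f*a + c
    z0 = (c0 + e * b0 + f * a0) % P
    z1 = (c1 + e * b1 + f * a1 + e * f) % P
    return reconstruct(z0, z1)

print(secure_mul(7, 6))  # 42
```

In a real deployment the two share-holding servers exchange only the masked openings `e` and `f`, so each multiplication costs one round of communication; the matrix form of this local share arithmetic is exactly the kind of bulk computation a GPU framework can accelerate.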
- Published
- 2021