
GPU-based Private Information Retrieval for On-Device Machine Learning Inference

Authors:
Lam, Maximilian
Johnson, Jeff
Xiong, Wenjie
Maeng, Kiwan
Gupta, Udit
Li, Yang
Lai, Liangzhen
Leontiadis, Ilias
Rhu, Minsoo
Lee, Hsien-Hsin S.
Reddi, Vijay Janapa
Wei, Gu-Yeon
Brooks, David
Suh, G. Edward
Publication Year:
2023

Abstract

On-device machine learning (ML) inference can enable the use of private user data on user devices without revealing it to remote servers. However, a pure on-device solution to private ML inference is impractical for many applications that rely on embedding tables too large to be stored on-device. In particular, recommendation models typically use multiple embedding tables, each on the order of 1-10 GB, which cannot fit in device memory. To overcome this barrier, we propose the use of private information retrieval (PIR) to efficiently and privately retrieve embeddings from servers without sharing any private information. As off-the-shelf PIR algorithms are usually too computationally intensive to directly use for latency-sensitive inference tasks, we 1) propose novel GPU-based acceleration of PIR, and 2) co-design PIR with the downstream ML application to obtain further speedup. Our GPU acceleration strategy improves system throughput by more than $20 \times$ over an optimized CPU PIR implementation, and our PIR-ML co-design provides an over $5 \times$ additional throughput improvement at fixed model quality. Together, for various on-device ML applications such as recommendation and language modeling, our system on a single V100 GPU can serve up to $100,000$ queries per second -- a $>100 \times$ throughput improvement over a CPU-based baseline -- while maintaining model accuracy.
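
The abstract does not specify the underlying PIR protocol, so the sketch below is only an assumed illustration: a classic two-server PIR based on XOR secret sharing of a one-hot selection vector. It shows why PIR is a natural fit for GPUs: each server's answer is a linear scan over the whole embedding table, the same data-parallel shape as a matrix-vector product. All names, sizes, and parameters here are hypothetical, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding table: num_rows embeddings of dimension dim,
# replicated on two non-colluding servers.
num_rows, dim = 1024, 64
table = rng.integers(0, 2**16, size=(num_rows, dim), dtype=np.uint64)

def client_query(index: int, n: int):
    """Split a one-hot selection vector into two XOR shares.

    Each share alone is uniformly random, so neither server
    learns which embedding row the client wants.
    """
    share_a = rng.integers(0, 2, size=n, dtype=np.uint8)
    one_hot = np.zeros(n, dtype=np.uint8)
    one_hot[index] = 1
    share_b = share_a ^ one_hot
    return share_a, share_b

def server_answer(share: np.ndarray) -> np.ndarray:
    """XOR together the rows selected by the share.

    This full scan of the table is the PIR server's cost; real systems
    (and, presumably, the paper's GPU kernels) batch many such queries
    into one large matrix product for throughput.
    """
    ans = np.zeros(dim, dtype=np.uint64)
    for bit, row in zip(share, table):
        if bit:
            ans ^= row
    return ans

# Client retrieves row 42 without revealing 42 to either server:
# the two answers XOR to exactly the requested embedding.
qa, qb = client_query(42, num_rows)
embedding = server_answer(qa) ^ server_answer(qb)
assert np.array_equal(embedding, table[42])

Note that this toy construction needs two non-colluding servers; single-server PIR schemes replace the XOR shares with homomorphic encryption, at higher computational cost, which is the regime where the GPU acceleration described above matters most.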

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2301.10904
Document Type:
Working Paper