Primer: Fast Private Transformer Inference on Encrypted Data
- Source : 2023 Design Automation Conference
- Publication Year : 2023
Abstract
- It is increasingly important to enable privacy-preserving inference for cloud services built on Transformers. Post-quantum cryptographic techniques, e.g., fully homomorphic encryption (FHE) and multi-party computation (MPC), are popular methods for supporting private Transformer inference. However, existing works still suffer from prohibitive computational and communication overhead. In this work, we present Primer, which enables fast and accurate Transformer inference over encrypted data for natural language processing tasks. In particular, Primer is built on a hybrid cryptographic protocol optimized for attention-based Transformer models, together with techniques including computation merge and tokens-first ciphertext packing. Comprehensive experiments on encrypted language modeling show that Primer achieves state-of-the-art accuracy and reduces inference latency by 90.6% to 97.5% over previous methods.
- Comment : 6 pages, 6 figures, 3 tables
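
The record does not spell out how tokens-first ciphertext packing works, but in slot-based FHE schemes a common layout of this kind places the same hidden feature of every token contiguously in one ciphertext, so a single SIMD-style homomorphic operation touches all tokens at once. The sketch below illustrates that layout in plain NumPy (no FHE library, to avoid guessing at APIs); all names, sizes, and the exact layout are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch of a "tokens-first" packing layout (an assumption,
# not Primer's actual code): slot vector j holds feature j of all tokens,
# so one slot-wise homomorphic op would process the whole sequence.

num_tokens, hidden_dim = 128, 64   # hypothetical sequence / model sizes
num_slots = 4096                   # hypothetical FHE ciphertext slot count

x = np.random.randn(num_tokens, hidden_dim)  # plaintext activations

def pack_tokens_first(x, num_slots):
    """Pack feature-by-feature: row j of the result holds feature j
    of every token in its first `num_tokens` slots (rest zero-padded).
    Each row would become one ciphertext after encryption."""
    tokens, dim = x.shape
    assert tokens <= num_slots, "sequence must fit in the slot count"
    packed = np.zeros((dim, num_slots))
    packed[:, :tokens] = x.T
    return packed

packed = pack_tokens_first(x, num_slots)

# With this layout, adding a per-feature bias reaches all tokens with a
# single slot-wise addition per ciphertext:
bias = np.random.randn(hidden_dim)
packed[:, :num_tokens] += bias[:, None]
```

The appeal of such a layout is that per-feature operations amortize over the whole sequence; whether Primer uses exactly this arrangement is not stated in this record.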
- Subjects : Computer Science - Cryptography and Security
Details
- Database : arXiv
- Journal : 2023 Design Automation Conference
- Publication Type : Report
- Accession number : edsarx.2303.13679
- Document Type : Working Paper