5 results for "secure neural network inference"
Search Results
2. Secure Neural Network Inference for Edge Intelligence: Implications of Bandwidth and Energy Constraints
- Author
- Prins, Jorit; Mann, Zoltán Ádám; Fortino, Giancarlo (Series Editor); Liotta, Antonio (Series Editor); Pal, Souvik (Editor); Savaglio, Claudio (Editor); Minerva, Roberto (Editor); Delicato, Flávia C. (Editor)
- Published
- 2024
- Full Text
- View/download PDF
3. HeFUN: Homomorphic Encryption for Unconstrained Secure Neural Network Inference.
- Author
- Nguyen, Duy Tung Khanh; Duong, Dung Hoang; Susilo, Willy; Chow, Yang-Wai; Ta, The Anh
- Subjects
ARTIFICIAL neural networks; CONVOLUTIONAL neural networks; NONLINEAR functions
- Abstract
Homomorphic encryption (HE) has emerged as a pivotal technology for secure neural network inference (SNNI), offering privacy-preserving computation on encrypted data. Despite active development in this field, HE-based SNNI frameworks are impeded by three inherent limitations. First, they cannot evaluate non-linear functions such as ReLU, the most widely adopted activation function in neural networks. Second, the permitted number of homomorphic operations on ciphertexts is bounded, limiting the depth of neural networks that can be evaluated. Third, the computational overhead associated with HE is prohibitively high, particularly for deep neural networks. In this paper, we introduce a novel paradigm designed to address these three limitations. Our approach is interactive and based solely on HE; we call it iLHE. Using iLHE, we present two protocols: HeReLU, which facilitates the direct evaluation of the ReLU function on encrypted data, tackling the first limitation; and HeRefresh, which extends the feasible depth of neural network computations and mitigates the computational overhead, thereby addressing the second and third limitations. Building on the HeReLU and HeRefresh protocols, we construct a new SNNI framework named HeFUN. We prove that our protocols and the HeFUN framework are secure in the semi-honest security model. Empirical evaluations demonstrate that HeFUN surpasses current HE-based SNNI frameworks in multiple aspects, including security, accuracy, number of communication rounds, and inference latency. Specifically, for a four-layer convolutional neural network on the MNIST dataset, HeFUN achieves 99.16% accuracy with an inference latency of 1.501 s, surpassing the popular HE-based framework CryptoNets by Gilad-Bachrach et al., which achieves 98.52% accuracy with an inference latency of 3.479 s. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
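The first limitation above follows from what HE can compute: additions and multiplications on ciphertexts, hence polynomials, but not the exact comparison inside ReLU(x) = max(x, 0). That is why pre-HeFUN frameworks such as CryptoNets substitute a polynomial activation. A minimal NumPy sketch (not from the paper; the degree and the normalized input interval are illustrative assumptions) of how closely a low-degree polynomial can track ReLU:

```python
import numpy as np

# HE evaluates polynomials on ciphertexts, but not the comparison in
# ReLU(x) = max(x, 0). Measuring the gap of the best low-degree substitute
# shows the accuracy cost that polynomial-activation frameworks accept.
def relu(x):
    return np.maximum(x, 0.0)

xs = np.linspace(-1.0, 1.0, 201)          # assumed normalized input range
coeffs = np.polyfit(xs, relu(xs), deg=2)  # least-squares degree-2 fit
approx = np.polyval(coeffs, xs)

max_err = np.max(np.abs(approx - relu(xs)))
print(f"max |poly - ReLU| on [-1, 1]: {max_err:.3f}")
```

Even the best quadratic leaves a visible gap near x = 0; per the abstract, HeFUN's interactive HeReLU protocol sidesteps this by evaluating the true ReLU on encrypted data.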
4. HeFUN: Homomorphic Encryption for Unconstrained Secure Neural Network Inference
- Author
- Duy Tung Khanh Nguyen, Dung Hoang Duong, Willy Susilo, Yang-Wai Chow, and The Anh Ta
- Subjects
privacy-preserving machine learning; secure neural network inference; homomorphic encryption; Information technology; T58.5-58.64
- Abstract
Homomorphic encryption (HE) has emerged as a pivotal technology for secure neural network inference (SNNI), offering privacy-preserving computation on encrypted data. Despite active development in this field, HE-based SNNI frameworks are impeded by three inherent limitations. First, they cannot evaluate non-linear functions such as ReLU, the most widely adopted activation function in neural networks. Second, the permitted number of homomorphic operations on ciphertexts is bounded, limiting the depth of neural networks that can be evaluated. Third, the computational overhead associated with HE is prohibitively high, particularly for deep neural networks. In this paper, we introduce a novel paradigm designed to address these three limitations. Our approach is interactive and based solely on HE; we call it iLHE. Using iLHE, we present two protocols: HeReLU, which facilitates the direct evaluation of the ReLU function on encrypted data, tackling the first limitation; and HeRefresh, which extends the feasible depth of neural network computations and mitigates the computational overhead, thereby addressing the second and third limitations. Building on the HeReLU and HeRefresh protocols, we construct a new SNNI framework named HeFUN. We prove that our protocols and the HeFUN framework are secure in the semi-honest security model. Empirical evaluations demonstrate that HeFUN surpasses current HE-based SNNI frameworks in multiple aspects, including security, accuracy, number of communication rounds, and inference latency. Specifically, for a four-layer convolutional neural network on the MNIST dataset, HeFUN achieves 99.16% accuracy with an inference latency of 1.501 s, surpassing the popular HE-based framework CryptoNets by Gilad-Bachrach et al., which achieves 98.52% accuracy with an inference latency of 3.479 s.
- Published
- 2023
- Full Text
- View/download PDF
5. B-LNN: Inference-time linear model for secure neural network inference.
- Author
- Wang, Qizheng; Ma, Wenping; Wang, Weiwei
- Subjects
- INFERENCE (Logic); FEATURE extraction; SERVICE learning; MACHINE learning
- Abstract
Machine Learning as a Service (MLaaS) provides clients with well-trained neural networks for predicting on private data. Conventional MLaaS prediction requires clients to send sensitive inputs to the server, or proprietary models to be stored on the client's device. The former compromises client privacy, while the latter harms the interests of model providers. Existing work on privacy-preserving MLaaS introduces cryptographic primitives that allow two parties to perform neural network inference without revealing either party's data. However, non-linear activation functions bring high computational overhead and response delays to the inference process of these schemes. In this paper, we analyze the mechanism by which activation functions enhance model expressivity, and design an activation function, S-cos, that is friendly to secure neural network inference. Our proposed S-cos can be re-parameterized into a linear layer during the inference phase. Further, we propose an inference-time linear model called Beyond Linear Neural Network (B-LNN), equipped with S-cos, which exhibits promising performance on several benchmark datasets.
- We argue that activation functions introduce an inductive bias into the learning process, and that the design toward cosine similarity makes ReLU and its variants superior to value-oriented polynomial activation functions. We further propose a novel cosine-similarity-oriented activation function, called S-cos.
- We analyze the advantages of deep features over shallow ones and introduce a random feature extraction module to improve the performance of models containing a single activation layer.
- We design a Beyond Linear Neural Network (B-LNN), equipped with our proposed S-cos and random feature extraction module.
- We design a secure neural network inference framework with Arithmetic Secret Sharing (A-SS) by taking advantage of the inference-time linearity of B-LNN. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
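The point of B-LNN's inference-time linearity is that a linear model commutes with additive secret sharing, so two parties can run the whole forward pass on shares. A toy two-party sketch of that idea (illustrative only: small integer weights, a public prime modulus, and weights visible to both parties; the paper's quantization and share-distribution details are omitted):

```python
import numpy as np

P = 2**31 - 1                       # public prime modulus (illustrative choice)
rng = np.random.default_rng(7)

def share(x):
    """Split an integer vector x into two additive shares: x = (x0 + x1) mod P."""
    x0 = rng.integers(0, P, size=x.shape)
    x1 = (x - x0) % P
    return x0, x1

# A linear layer commutes with additive sharing: W @ x = (W @ x0 + W @ x1) mod P,
# so each party evaluates the layer on its share without ever seeing the input.
W = np.array([[1, 2, 3], [4, 5, 6]])     # toy model weights (public here for simplicity)
x = np.array([7, 8, 9])                  # client's private input

x0, x1 = share(x)
y0 = (W @ x0) % P                        # computed locally by party 0
y1 = (W @ x1) % P                        # computed locally by party 1
y = (y0 + y1) % P                        # reconstruction of the result
assert np.array_equal(y, (W @ x) % P)    # matches plaintext inference
```

No interaction is needed between the sharing and reconstruction steps, which is exactly the communication saving the abstract attributes to making the network linear at inference time; a non-linear activation in the middle would force extra protocol rounds.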
Discovery Service for Jio Institute Digital Library