401. TDPP: Two-Dimensional Permutation-Based Protection of Memristive Deep Neural Networks
- Author
Zou, Minhui, Zhu, Zhenhua, Greenberg-Toledo, Tzofnat, Leitersdorf, Orian, Li, Jiang, Zhou, Junlong, Wang, Yu, Du, Nan, and Kvatinsky, Shahar
- Abstract
The execution of deep neural network (DNN) algorithms suffers from significant bottlenecks due to the separation of the processing and memory units in traditional computer systems. Emerging memristive computing systems introduce an in situ approach that overcomes this bottleneck. The non-volatility of memristive devices, however, may expose the DNN weights stored in memristive crossbars to potential theft attacks. Therefore, this paper proposes a two-dimensional permutation-based protection (TDPP) method that thwarts such attacks. We first introduce the underlying concept that motivates the TDPP method: permuting both the rows and columns of the DNN weight matrices. This contrasts with previous methods, which permute only a single dimension of the weight matrices, either the rows or the columns. Even if an adversary gains access to the matrix values, the original arrangement of the rows and columns remains concealed, so a DNN model extracted from the accessed values would fail to operate correctly. We consider two different memristive computing systems (designed for layer-by-layer and layer-parallel processing, respectively) and show how the TDPP method can be embedded into each of them. Finally, we present a security analysis. Our experiments demonstrate that TDPP achieves effectiveness comparable to prior approaches, with a high level of security when appropriately parameterized. In addition, TDPP is more scalable than previous methods and incurs lower area and power overheads. Compared to prior works, area and power are reduced by 1218$\times$ and 2815$\times$, respectively, for the layer-by-layer system, and by 178$\times$ and 203$\times$ for the layer-parallel system.
Comment: 14 pages, 11 figures
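To make the core idea concrete, the following is a minimal NumPy sketch (not the paper's implementation; the layer size and keys are hypothetical) of two-dimensional permutation: the weight matrix is stored with both its rows and columns shuffled by secret keys, so the stored values alone yield a scrambled model, while the legitimate system inverts both permutations.

```python
# Minimal sketch of two-dimensional (row + column) permutation of a DNN
# weight matrix, as motivated by the TDPP idea. Not the authors' design.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weight matrix of one DNN layer.
W = rng.standard_normal((4, 6))

# Secret permutation keys for the row and column dimensions.
row_key = rng.permutation(W.shape[0])
col_key = rng.permutation(W.shape[1])

# What is stored in the (non-volatile) memristive crossbar: both dimensions permuted.
W_stored = W[row_key][:, col_key]

# An adversary reading W_stored without the keys obtains a scrambled model.
# The legitimate system applies the inverse permutations to recover the weights.
inv_row = np.argsort(row_key)
inv_col = np.argsort(col_key)
W_recovered = W_stored[inv_row][:, inv_col]

assert np.allclose(W_recovered, W)
```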
- Published
- 2023