1. Preventing DNN Model IP Theft via Hardware Obfuscation
- Author
Victor C. Ferreira, Vinay C. Patil, Felipe M. G. França, Brunno F. Goldstein, Sandip Kundu, and Alexandre S. Nery
- Subjects
Computer science, Deep learning, Artificial intelligence, Hardware obfuscation, Obfuscation (software), Cryptography, Encryption, Public key infrastructure, Trusted system, Data modeling, Embedded system, Electrical and Electronic Engineering
- Abstract
Training accurate deep learning (DL) models requires large amounts of training data, significant effort in labeling that data, considerable computing resources, and substantial domain expertise. In short, such models are expensive to develop. Hence, protecting these models, which are valuable stores of intellectual property (IP), against model stealing/cloning attacks is of paramount importance. Today’s mobile processors feature Neural Processing Units (NPUs) to accelerate the execution of DL models. DL models executing on NPUs are vulnerable to hyperparameter extraction via side-channel attacks and to model parameter theft via bus monitoring attacks. This paper presents a novel solution to defend against DL IP theft in NPUs during model distribution and deployment/execution via a lightweight, keyed model obfuscation scheme. Unauthorized use of such models results in inaccurate classification. In addition, we present an ideal end-to-end deep learning trusted system composed of: 1) model distribution via a hardware root-of-trust and public-key cryptography infrastructure (PKI) and 2) model execution via low-latency memory encryption. We demonstrate that the proposed obfuscation solution achieves its IP protection objectives without requiring specialized training or sacrificing the model’s accuracy. In addition, the proposed obfuscation mechanism preserves the output class distribution while degrading the model’s accuracy for unauthorized parties, concealing any evidence of a hacked model.
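The abstract does not spell out the obfuscation mechanism itself. The minimal Python sketch below only illustrates the general idea of keyed model obfuscation, assuming a hypothetical key-derived permutation of a layer's output rows; the function names (`obfuscate_layer`, `deobfuscate_layer`, `key_permutation`) and the keys are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch only (not the paper's actual scheme): obfuscate one layer's
# (out_features x in_features) weight matrix with a key-derived row permutation.
import hashlib
import numpy as np

def key_permutation(key: bytes, n: int) -> np.ndarray:
    """Derive a deterministic permutation of n indices from a secret key."""
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return np.random.default_rng(seed).permutation(n)

def obfuscate_layer(weights: np.ndarray, key: bytes) -> np.ndarray:
    """Shuffle the output rows of the weight matrix under the key."""
    return weights[key_permutation(key, weights.shape[0])]

def deobfuscate_layer(obf_weights: np.ndarray, key: bytes) -> np.ndarray:
    """Invert the shuffle; only the correct key restores the original weights."""
    perm = key_permutation(key, obf_weights.shape[0])
    inv = np.empty_like(perm)
    inv[perm] = np.arange(perm.size)
    return obf_weights[inv]

# Toy check: the right key recovers the weights exactly.
w = np.arange(24, dtype=np.float32).reshape(8, 3)
obf = obfuscate_layer(w, b"device-secret-key")
assert np.allclose(deobfuscate_layer(obf, b"device-secret-key"), w)
# With a wrong key the rows come back shuffled, so inference degrades.
print(np.allclose(deobfuscate_layer(obf, b"wrong-key"), w))  # almost surely False
```

In a scheme of this general shape, only a device holding the correct key can undo the transformation before inference; with a wrong or missing key the layer's outputs are misaligned with what the next layer expects, so classifications become unreliable even though the model still emits the same set of output classes.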
- Published
- 2021