Memory-Efficient and Secure DNN Inference on TrustZone-enabled Consumer IoT Devices

Authors :
Xie, Xueshuo
Wang, Haoxu
Jian, Zhaolong
Li, Tao
Wang, Wei
Xu, Zhiwei
Wang, Guiling
Publication Year :
2024

Abstract

Edge intelligence enables resource-demanding Deep Neural Network (DNN) inference without transferring original data, addressing concerns about data privacy in consumer Internet of Things (IoT) devices. For privacy-sensitive applications, deploying models in hardware-isolated trusted execution environments (TEEs) becomes essential. However, the limited secure memory in TEEs poses challenges for deploying DNN inference, and alternative techniques such as model partitioning and offloading introduce performance degradation and security issues. In this paper, we present a novel approach for advanced model deployment in TrustZone that ensures comprehensive privacy preservation during model inference. We design a memory-efficient management method to support memory-demanding inference in TEEs. By adjusting the memory priority, we effectively mitigate memory leakage risks and memory overlap conflicts, at the cost of only 32 lines of code changes in the trusted operating system. Additionally, we leverage two tiny libraries: S-Tinylib (2,538 LoCs), a tiny deep learning library, and Tinylibm (827 LoCs), a tiny math library, to support efficient inference in TEEs. We implemented a prototype on a Raspberry Pi 3B+ and evaluated it using three well-known lightweight DNN models. The experimental results demonstrate that our design improves inference speed by 3.13 times and reduces power consumption by over 66.5% compared to the non-memory-optimized method in TEEs.
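
For readers unfamiliar with how a normal-world application drives TEE-resident inference of the kind the abstract describes, the sketch below shows a minimal client using the GlobalPlatform TEE Client API (as exposed by, e.g., OP-TEE on TrustZone hardware). The paper's record does not specify its client interface; the TA UUID, command ID, and buffer layout here are illustrative assumptions, not the authors' implementation.

/* Minimal sketch: normal-world client invoking a DNN-inference trusted
 * application via the GlobalPlatform TEE Client API. The UUID, command
 * ID, and buffer sizes are hypothetical placeholders. */
#include <stdio.h>
#include <string.h>
#include <tee_client_api.h>

/* Hypothetical UUID of the inference trusted application. */
#define TA_DNN_UUID \
    { 0x12345678, 0x1234, 0x1234, \
      { 0x12, 0x34, 0x12, 0x34, 0x12, 0x34, 0x56, 0x78 } }
#define CMD_RUN_INFERENCE 0   /* hypothetical command ID */

int main(void)
{
    TEEC_Context ctx;
    TEEC_Session sess;
    TEEC_Operation op;
    TEEC_UUID uuid = TA_DNN_UUID;
    uint32_t origin;
    float input[784] = { 0 };   /* e.g. a flattened 28x28 image */
    float output[10] = { 0 };   /* e.g. class scores */

    if (TEEC_InitializeContext(NULL, &ctx) != TEEC_SUCCESS)
        return 1;
    if (TEEC_OpenSession(&ctx, &sess, &uuid, TEEC_LOGIN_PUBLIC,
                         NULL, NULL, &origin) != TEEC_SUCCESS) {
        TEEC_FinalizeContext(&ctx);
        return 1;
    }

    memset(&op, 0, sizeof(op));
    op.paramTypes = TEEC_PARAM_TYPES(TEEC_MEMREF_TEMP_INPUT,
                                     TEEC_MEMREF_TEMP_OUTPUT,
                                     TEEC_NONE, TEEC_NONE);
    op.params[0].tmpref.buffer = input;
    op.params[0].tmpref.size = sizeof(input);
    op.params[1].tmpref.buffer = output;
    op.params[1].tmpref.size = sizeof(output);

    /* The model and its weights stay inside the trusted application;
     * only the input sample and the result cross the world boundary. */
    if (TEEC_InvokeCommand(&sess, CMD_RUN_INFERENCE, &op, &origin) ==
        TEEC_SUCCESS)
        printf("first class score: %f\n", output[0]);

    TEEC_CloseSession(&sess);
    TEEC_FinalizeContext(&ctx);
    return 0;
}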

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438537962
Document Type :
Electronic Resource