
Toward Efficient and Adaptive Design of Video Detection System with Deep Neural Networks.

Authors :
Jiachen Mao
Qing Yang
Ang Li
Kent W. Nixon
Hai Li
Yiran Chen
Source :
ACM Transactions on Embedded Computing Systems; May 2022, Vol. 21, Issue 3, pp. 1-21
Publication Year :
2022

Abstract

In the past decade, Deep Neural Networks (DNNs), e.g., Convolutional Neural Networks, have achieved human-level performance in vision tasks such as object classification and detection. However, DNNs are known to be computationally expensive and thus hard to deploy in real-time and edge applications. Many previous works have focused on DNN model compression to obtain smaller parameter sizes and, consequently, lower computational cost. Such methods, however, often introduce noticeable accuracy degradation. In this work, we optimize a state-of-the-art DNN-based video detection framework, Deep Feature Flow (DFF), from the cloud end using three proposed ideas. First, we propose Asynchronous DFF (ADFF) to asynchronously execute the neural networks. Second, we propose a Video-based Dynamic Scheduling (VDS) method that decides the detection frequency based on the magnitude of movement between video frames. Last, we propose Spatial Sparsity Inference (SSI), which performs inference on only part of the video frame and thus reduces the computation cost. According to our experimental results, ADFF reduces the bottleneck latency from 89 to 19 ms; VDS increases the detection accuracy by 0.6% mAP without increasing computation cost; and SSI further saves 0.2 ms with a 0.6% mAP degradation in detection accuracy. [ABSTRACT FROM AUTHOR]
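The abstract's Video-based Dynamic Scheduling idea, running the expensive detector only when inter-frame movement is large and otherwise reusing cached results, can be sketched as below. This is a minimal illustration, not the paper's actual algorithm: the `movement_magnitude` metric (mean absolute pixel difference), the `threshold` value, and the function names are all assumptions for demonstration.

```python
import numpy as np

def movement_magnitude(prev_frame, frame):
    """Hypothetical movement metric: mean absolute per-pixel difference
    between two consecutive frames (the paper may use a different measure)."""
    return float(np.mean(np.abs(frame.astype(np.float32) -
                                prev_frame.astype(np.float32))))

def vds_schedule(frames, detect, threshold=10.0):
    """Run `detect` (the expensive DNN inference) only when the movement
    since the previous frame exceeds `threshold`; otherwise reuse the
    most recent detection result for the current frame."""
    results = []
    last = None   # cached detection result
    prev = None   # previous frame
    for frame in frames:
        if prev is None or movement_magnitude(prev, frame) > threshold:
            last = detect(frame)   # re-detect on the first or a changed frame
        results.append(last)       # static frames reuse the cached result
        prev = frame
    return results
```

With a sequence of mostly static frames, `detect` fires only on the first frame and on frames where the scene visibly changes, which is the computation-saving behavior VDS relies on.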

Details

Language :
English
ISSN :
1539-9087
Volume :
21
Issue :
3
Database :
Complementary Index
Journal :
ACM Transactions on Embedded Computing Systems
Publication Type :
Academic Journal
Accession number :
158089062
Full Text :
https://doi.org/10.1145/3484946