Improving Memory Utilization in Convolutional Neural Network Accelerators
- Publication Year :
- 2020
Abstract
- While the accuracy of convolutional neural networks (CNNs) has improved vastly through larger and deeper network architectures, the memory footprint for storing their parameters and activations has grown as well. This trend especially challenges power- and resource-limited accelerator designs, which are often restricted to storing all network data in on-chip memory to avoid interfacing energy-hungry external memories. Maximizing the network size that fits on a given accelerator thus requires maximizing its memory utilization. Whereas the traditional ping-pong buffering technique maps subsequent activation layers to disjoint memory regions, we propose a mapping method that allows these regions to overlap and thus utilizes the memory more efficiently. This letter presents a mathematical model to compute the maximum overlap of the activations memory, and thus the lower bound of on-chip memory needed to perform layer-by-layer processing of CNNs on memory-limited accelerators. Our experiments with various real-world object-detector networks show that the proposed mapping technique can decrease the activations memory by up to 32.9%, reducing the overall memory for the entire network by up to 23.9% compared to traditional ping-pong buffering. For higher-resolution denoising networks, we achieve activation memory savings of 48.8%. Additionally, we implement a face-detector network on a field-programmable gate-array-based camera to validate these memory savings on a complete end-to-end system.
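The contrast the abstract draws can be sketched numerically. Under ping-pong buffering, each layer's input and output activations occupy disjoint regions, so the peak activation memory is the largest sum of consecutive layer sizes; with an overlapping layout, part of the input region can be reclaimed for the output once consumed. The sketch below is an illustration only, not the paper's model: the layer sizes and the per-layer `overlaps` values are hypothetical stand-ins for the maximum safe overlap that the letter derives analytically.

```python
def ping_pong_memory(sizes):
    """Peak activation memory with ping-pong buffering: input and output
    of each layer live in disjoint regions, so the peak is the largest
    sum over consecutive layer pairs."""
    return max(a + b for a, b in zip(sizes, sizes[1:]))

def overlapped_memory(sizes, overlaps):
    """Peak activation memory when layer i's output region may overlap
    overlaps[i] units of its input region (hypothetical values here;
    the paper's model computes the maximum safe overlap per layer)."""
    pairs = zip(sizes, sizes[1:])
    return max(a + b - o for (a, b), o in zip(pairs, overlaps))

# Hypothetical activation sizes (e.g. in KiB) for a four-layer network.
sizes = [100, 80, 120, 60]
overlaps = [40, 30, 20]  # assumed safe overlap for each layer transition

print(ping_pong_memory(sizes))            # peak with disjoint buffers
print(overlapped_memory(sizes, overlaps)) # peak with overlapping buffers
```

With these made-up numbers the overlapping layout needs 170 units instead of 200, a 15% reduction; the letter reports savings of up to 32.9% on real object-detector networks, where the true overlaps are computed from each layer's processing order.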
- Subjects :
- Signal Processing (eess.SP)
FOS: Computer and information sciences
Computer Science - Machine Learning
General Computer Science
Computer science
Computer Vision and Pattern Recognition (cs.CV)
Computer Science - Computer Vision and Pattern Recognition
Convolutional neural network
memory requirements
Machine Learning (cs.LG)
lower bound
Gate array
FOS: Electrical engineering, electronic engineering, information engineering
System on a chip
Neural and Evolutionary Computing (cs.NE)
Electrical Engineering and Systems Science - Signal Processing
hardware accelerator
Network architecture
Hardware_MEMORYSTRUCTURES
Image and Video Processing (eess.IV)
Detector
Computer Science - Neural and Evolutionary Computing
Electrical Engineering and Systems Science - Image and Video Processing
Memory management
Computer engineering
Control and Systems Engineering
Interfacing
Memory footprint
Details
- Language :
- English
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....fc0df18b816630d5b1d2acac7a59ebda