1. Memory-Centric Accelerator Design for Convolutional Neural Networks
- Author
- Arnaud Arindra Adiyoso Setio, Maurice Peemen, Bart Mesman, and Henk Corporaal (Electronic Systems group)
- Subjects
Virtex, Memory hierarchy, Computer science, Locality, Cache-only memory architecture, Memory bandwidth, Memory management, Embedded system, Interleaved memory, Computing with Memory, Auxiliary memory
- Abstract
In the near future, cameras will be used everywhere as flexible sensors for numerous applications. For mobility and privacy reasons, the required image processing should run locally on embedded computer platforms, which imposes strict performance requirements and energy constraints. Dedicated acceleration of Convolutional Neural Networks (CNNs) can achieve these targets with enough flexibility to perform multiple vision tasks. A challenging problem for the design of efficient accelerators is the limited amount of external memory bandwidth. We show that the effects of the memory bottleneck can be reduced by a flexible memory hierarchy that supports the complex data access patterns in CNN workloads. The efficiency of the on-chip memories is maximized by our scheduler, which uses tiling to optimize for data locality. Our design flow ensures that on-chip memory size is minimized, which reduces area and energy usage. The design flow is evaluated by a High-Level Synthesis implementation on a Virtex 6 FPGA board. Compared to accelerators with standard scratchpad memories, the FPGA resources can be reduced by up to 13× while maintaining the same performance. Alternatively, when the same amount of FPGA resources is used, our accelerators are up to 11× faster.
- Published
- 2013