Partitioning Compute Units in CNN Acceleration for Statistical Memory Traffic Shaping.
- Source :
- IEEE Computer Architecture Letters; Jan/Jun2018, Vol. 17 Issue 1, p72-75, 4p
- Publication Year :
- 2018
-
Abstract
- Convolutional Neural Networks (CNNs) have become the default choice for processing visual information, and the design complexity of CNNs has been steadily increasing to improve accuracy. To cope with the massive amount of computation needed for such complex CNNs, the latest solutions utilize blocking of an image over the available dimensions (e.g., horizontal, vertical, channel, and kernel) and batching of multiple input images to improve data reuse in the memory hierarchy. While there is a large body of work on maximizing data reuse, only a few studies have focused on the memory bottleneck caused by limited bandwidth. A bandwidth bottleneck can easily occur in CNN acceleration because CNN layers have different sizes with varying computation needs, and because batching is typically performed over each layer of the CNN for ideal data reuse. In this case, the data transfer demand of a layer can be low or high relative to its computation requirement, which induces temporal fluctuations in memory access and eventually causes bandwidth problems. In this paper, we first show that the memory-access-to-computation ratio fluctuates heavily depending on which CNN layer, and which function within that layer, is being processed by the compute units (cores) when the cores are tightly synchronized to maximize data reuse. We then propose a strategy of partitioning the compute units such that the cores within each partition process a batch of input data synchronously to maximize data reuse, while different partitions run asynchronously. Because the partitions stay asynchronous and typically process different CNN layers at any given moment, the memory access traffic of the partitions becomes statistically shuffled.
Thus, partitioning the compute units and running the partitions asynchronously smooths the total memory access traffic over time, and the degree of partitioning determines a tradeoff between data reuse efficiency and memory bandwidth utilization efficiency. We call this smoothing statistical memory traffic shaping, and we show that it yields an 8.0 percent performance gain on a commercial 64-core processor running ResNet-50. [ABSTRACT FROM PUBLISHER]
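The core idea in the abstract can be illustrated with a toy model: when all cores are synchronized, the aggregate memory traffic at any instant is the per-layer traffic multiplied by the core count, so layer-to-layer fluctuations are amplified; when partitions run at different phases of the layer sequence, their demands overlap and the peak flattens. The sketch below is a minimal simulation of that effect; the per-layer traffic numbers and phase offsets are invented for illustration and are not taken from the paper.

```python
# Toy model of statistical memory traffic shaping.
# Assumption: each partition cycles through the CNN's layers in order,
# and a layer's memory traffic per time step is a fixed (made-up) value.

def total_traffic(per_layer_traffic, num_partitions, offsets):
    """Aggregate memory traffic over one layer cycle when partition p
    starts the layer sequence at phase offsets[p]."""
    num_layers = len(per_layer_traffic)
    return [
        sum(per_layer_traffic[(t + offsets[p]) % num_layers]
            for p in range(num_partitions))
        for t in range(num_layers)
    ]

# Hypothetical per-layer memory traffic (arbitrary units): CNN layers
# differ widely in their memory-access-to-computation ratio.
traffic = [9, 1, 7, 2, 8, 1, 6, 2]
P = 4  # number of partitions

# All partitions in lockstep: peaks are amplified P-fold.
synced = total_traffic(traffic, P, offsets=[0] * P)

# Partitions phase-shifted: each processes a different layer at any moment,
# so the per-layer demands statistically shuffle and the peak flattens.
staggered = total_traffic(traffic, P, offsets=[0, 1, 2, 3])

print(max(synced), max(staggered))  # → 36 19: same total traffic, lower peak
```

The total traffic over a full cycle is identical in both cases; only its distribution over time changes, which is exactly the tradeoff the abstract describes between data reuse (favoring synchronized cores) and bandwidth utilization (favoring desynchronized partitions).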
Details
- Language :
- English
- ISSN :
- 1556-6056
- Volume :
- 17
- Issue :
- 1
- Database :
- Complementary Index
- Journal :
- IEEE Computer Architecture Letters
- Publication Type :
- Academic Journal
- Accession number :
- 128484232
- Full Text :
- https://doi.org/10.1109/LCA.2017.2773055