A Scalable Near-Memory Architecture for Training Deep Neural Networks on Large In-Memory Datasets.

Authors :
Schuiki, Fabian
Schaffner, Michael
Gurkaynak, Frank K.
Benini, Luca
Source :
IEEE Transactions on Computers; 4/1/2019, Vol. 68 Issue 4, p484-497, 14p
Publication Year :
2019

Abstract

Most investigations into near-memory hardware accelerators for deep neural networks have primarily focused on inference, while the potential of accelerating training has received relatively little attention so far. Based on an in-depth analysis of the key computational patterns in state-of-the-art gradient-based training methods, we propose an efficient near-memory acceleration engine called NTX that can be used to train state-of-the-art deep convolutional neural networks at scale. Our main contributions are: (i) a loose coupling of RISC-V cores and NTX co-processors reducing offloading overhead by 7× over previously published results; (ii) an optimized IEEE 754 compliant data path for fast high-precision convolutions and gradient propagation; (iii) evaluation of near-memory computing with NTX embedded into residual area on the Logic Base die of a Hybrid Memory Cube; and (iv) a scaling analysis to meshes of HMCs in a data center scenario. We demonstrate a 2.7× energy efficiency improvement of NTX over contemporary GPUs at 4.4× less silicon area, and a compute performance of 1.2 Tflop/s for training large state-of-the-art networks with full floating-point precision. At the data center scale, a mesh of NTX achieves above 95 percent parallel and energy efficiency, while providing 2.1× energy savings or 3.1× performance improvement over a GPU-based system. [ABSTRACT FROM AUTHOR]
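For context on the workload the abstract refers to, the sketch below is a minimal NumPy illustration (not code from the paper, and the function names and shapes are assumptions) of the multiply-accumulate pattern that dominates gradient-based CNN training: a forward convolution, its weight-gradient computation, and an SGD update, each of which reduces to long streaming fused multiply-add chains of the kind a near-memory engine such as NTX is designed to execute close to the data.

    # Illustrative only: a NumPy sketch of the multiply-accumulate pattern that
    # dominates gradient-based CNN training (forward conv, weight gradient, SGD
    # update). Names and shapes are assumptions, not the NTX interface.
    import numpy as np

    def conv2d_forward(x, w):
        """Valid 2D cross-correlation (single channel): y[i,j] = sum_{a,b} x[i+a,j+b] * w[a,b]."""
        H, W = x.shape
        K, _ = w.shape
        y = np.zeros((H - K + 1, W - K + 1), dtype=np.float32)
        for i in range(y.shape[0]):
            for j in range(y.shape[1]):
                # Each output element is one long multiply-accumulate chain.
                y[i, j] = np.sum(x[i:i + K, j:j + K] * w)
        return y

    def conv2d_weight_grad(x, dy, K):
        """Gradient of the loss w.r.t. the weights: dw[a,b] = sum_{i,j} dy[i,j] * x[i+a,j+b]."""
        dw = np.zeros((K, K), dtype=np.float32)
        for a in range(K):
            for b in range(K):
                dw[a, b] = np.sum(dy * x[a:a + dy.shape[0], b:b + dy.shape[1]])
        return dw

    def sgd_step(w, dw, lr=1e-2):
        """Plain SGD update, again a streaming multiply-add over the parameters."""
        return w - lr * dw

    # Tiny end-to-end example with an L2 loss against a random target.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 8)).astype(np.float32)
    w = rng.standard_normal((3, 3)).astype(np.float32)
    target = rng.standard_normal((6, 6)).astype(np.float32)

    y = conv2d_forward(x, w)
    dy = y - target                      # dL/dy for L = 0.5 * ||y - target||^2
    w = sgd_step(w, conv2d_weight_grad(x, dy, K=3))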

Details

Language :
English
ISSN :
00189340
Volume :
68
Issue :
4
Database :
Complementary Index
Journal :
IEEE Transactions on Computers
Publication Type :
Academic Journal
Accession number :
135356287
Full Text :
https://doi.org/10.1109/TC.2018.2876312