
Low Power In-Memory Implementation of Ternary Neural Networks with Resistive RAM-Based Synapse

Authors :
E. Vianello
Axel Laborieux
Marc Bocquet
Jacques-Olivier Klein
Damien Querlioz
E. Nowak
Jean-Michel Portal
Tifenn Hirtzlin
L. Herrera Diez
Affiliations :
Centre de Nanosciences et de Nanotechnologies (C2N), Université Paris-Saclay, Centre National de la Recherche Scientifique (CNRS)
Institut des Matériaux, de Microélectronique et des Nanosciences de Provence (IM2NP), Aix Marseille Université (AMU), Université de Toulon (UTLN), CNRS
Commissariat à l'énergie atomique et aux énergies alternatives, Laboratoire d'Electronique et de Technologie de l'Information (CEA-LETI), Direction de Recherche Technologique (DRT)
Source :
2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Genova (virtual), Italy, 2020. ⟨10.1109/AICAS48895.2020.9073877⟩
Publisher :
IEEE

Abstract

The design of systems implementing low precision neural networks with emerging memories such as resistive random access memory (RRAM) is a promising approach for reducing the energy consumption of artificial intelligence (AI). Multiple works have, for example, proposed in-memory architectures to implement low power binarized neural networks. These simple neural networks, where synaptic weights and neuronal activations assume binary values, can indeed approach state-of-the-art performance on vision tasks. In this work, we revisit one of these architectures, where synapses are implemented in a differential fashion to reduce bit errors and synaptic weights are read using precharge sense amplifiers. Based on experimental measurements on a hybrid 130 nm CMOS/RRAM chip and on circuit simulation, we show that the same memory array architecture can be used to implement ternary weights instead of binary weights, and that this technique is particularly appropriate if the sense amplifier is operated in the near-threshold regime. We also show, based on neural network simulation on the CIFAR-10 image recognition task, that going from binary to ternary neural networks significantly increases neural network performance. These results highlight that the function of AI circuits may sometimes be revisited when they are operated in low power regimes.
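The two ideas in the abstract — ternary weight quantization and differential synapse encoding — can be illustrated with a minimal sketch. This is not the authors' circuit or training procedure; the threshold value, function names, and the LRS/HRS state labels are illustrative assumptions. It shows how a real-valued weight can be mapped to {-1, 0, +1} and how each ternary weight could be stored as a pair of resistive devices, with the zero state using both devices in the high-resistance state:

```python
import numpy as np

def ternarize(weights, threshold=0.05):
    # Map real-valued weights to {-1, 0, +1}: values whose magnitude
    # falls below the threshold become 0, the rest keep their sign.
    # (The threshold here is an illustrative choice, not from the paper.)
    w = np.asarray(weights, dtype=float)
    return np.where(np.abs(w) <= threshold, 0.0, np.sign(w))

def differential_encode(ternary_weight):
    # Encode one ternary weight as the states of a device pair on the
    # bit line (BL) and its complement (BLb). A binary differential
    # synapse uses only the first two rows; the third row is the extra
    # state that makes the same array ternary:
    #   +1 -> (LRS, HRS)   low-resistance vs high-resistance
    #   -1 -> (HRS, LRS)
    #    0 -> (HRS, HRS)   both devices high-resistance
    mapping = {1.0: ("LRS", "HRS"),
               -1.0: ("HRS", "LRS"),
               0.0: ("HRS", "HRS")}
    return mapping[float(ternary_weight)]

w = ternarize([0.20, -0.31, 0.01, -0.02, 0.40])
print(w.tolist())                          # [1.0, -1.0, 0.0, 0.0, 1.0]
print([differential_encode(x) for x in w])
```

In the paper's setting, the precharge sense amplifier compares the two devices of each pair; the point made in the abstract is that in the near-threshold regime the amplifier can also resolve the "both high-resistance" case, which is what allows the zero weight at no extra array cost.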

Details

Language :
English
ISBN :
978-1-72814-922-6
Database :
OpenAIRE
Journal :
2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Genova (virtual), Italy, 2020. ⟨10.1109/AICAS48895.2020.9073877⟩
Accession number :
edsair.doi.dedup.....dcfea00924662d744bc49de4f743d96d
Full Text :
https://doi.org/10.1109/aicas48895.2020.9073877