Experimental Demonstration of Multilevel Resistive Random Access Memory Programming for up to Two Months Stable Neural Networks Inference Accuracy
- Authors
Eduardo Esmanhotto, Tifenn Hirtzlin, Djohan Bonnet, Niccolo Castellani, Jean-Michel Portal, Damien Querlioz, Elisa Vianello
- Affiliations
Commissariat à l'énergie atomique et aux énergies alternatives - Laboratoire d'Electronique et de Technologie de l'Information (CEA-LETI); Institut des Matériaux, de Microélectronique et des Nanosciences de Provence (IM2NP), Aix Marseille Université (AMU)-Université de Toulon (UTLN)-Centre National de la Recherche Scientifique (CNRS); Centre de Nanosciences et de Nanotechnologies (C2N), Université Paris-Saclay-Centre National de la Recherche Scientifique (CNRS)
- Funding
ANR-18-CE24-0009 NEURONIC: Binary neural network based on a hybrid memory architecture integrating computing functions (CMOS/RRAM) for sensor fusion (2018)
- Subjects
[SPI.NANO] Engineering Sciences [physics] / Micro and nanotechnologies / Microelectronics
- Abstract
In recent years, artificial intelligence has reached significant milestones with the development of deep neural networks, but it suffers from a major limitation: its considerable energy consumption. [1] This limitation is primarily due to the energy cost of exchanging information between computation and memory units. [2,3] Memristors, also called resistive random access memories (RRAMs) in industrial laboratories, now provide an opportunity to increase the energy efficiency of AI dramatically. In contrast to complementary metal-oxide-semiconductor (CMOS)-based memories such as static or dynamic random access memories, which store one bit per unit cell, they can be programmed to intermediate states between their lowest and highest resistance values, allowing the synaptic weights of a neural network to be stored in a particularly compact manner. [4] In addition, using the fundamental laws of electric circuits, arrays of memristors can implement deep learning's most basic operation, multiply and accumulate (MAC): the multiply operation corresponds to Ohm's law, whereas the accumulate operation corresponds to Kirchhoff's current law. This type of "in-memory" computation consumes less power than equivalent digital implementations [5-9]: the computation is performed directly within memory, eliminating the energy associated with weight movement. [4,10,11] Moreover, nonvolatility offers an instant on/off feature: memristor-based systems can perform inference immediately after being turned on, allowing the power supply to be cut entirely whenever the system is not in use.
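The following is a minimal sketch of the two ideas in the abstract: mapping synaptic weights onto a discrete set of conductance levels (multilevel programming), and computing a MAC as currents through a crossbar, where Ohm's law multiplies and Kirchhoff's current law accumulates. The conductance range, number of levels, and weight-to-conductance mapping are illustrative assumptions, not the programming scheme of the paper.

```python
import numpy as np

# Hypothetical device parameters: conductance range and number of
# programmable levels are illustrative, not values from the paper.
G_MIN, G_MAX = 1e-6, 1e-4   # siemens: lowest/highest conductance states
N_LEVELS = 8                # multilevel RRAM: 8 programmable states

def weights_to_conductances(w):
    """Map real-valued weights onto the nearest quantized conductance level."""
    w_norm = (w - w.min()) / (w.max() - w.min())   # rescale weights to [0, 1]
    levels = np.round(w_norm * (N_LEVELS - 1))     # pick nearest discrete level
    return G_MIN + levels / (N_LEVELS - 1) * (G_MAX - G_MIN)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))        # toy 4x3 synaptic weight matrix
G = weights_to_conductances(w)     # conductance matrix, one device per weight

v = np.array([0.1, 0.2, 0.05])     # input voltages applied to the columns

# Ohm's law gives each device current I_ij = G_ij * V_j (the "multiply");
# Kirchhoff's current law sums currents along each row (the "accumulate").
i_out = G @ v                      # output currents = analog MAC result
print(i_out)
```

Because the whole matrix-vector product is read out as row currents in one step, no weight ever leaves the array, which is the source of the energy savings the abstract describes.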
- Published
2022