48 results for "Neftci, Emre"
Search Results
2. A 22-pJ/spike 73-Mspikes/s 130k-compartment neural array transceiver with conductance-based synaptic and membrane dynamics
- Author
-
Park, Jongkil, Ha, Sohmyung, Yu, Theodore, Neftci, Emre, and Cauwenberghs, Gert
- Subjects
Biomedical and Clinical Sciences ,Neurosciences ,Affordable and Clean Energy ,neuromorphic cognitive computing ,integrate-and-fire array transceiver ,address event representation ,conductance-based synapse ,dendritic computation ,log-domain translinear circuits ,asynchronous pipelining ,rectified linear unit ,Psychology ,Cognitive Sciences ,Biological psychology - Abstract
Neuromorphic cognitive computing offers a bio-inspired means to approach the natural intelligence of biological neural systems in silicon integrated circuits. Typically, such circuits either reproduce biophysical neuronal dynamics in great detail as tools for computational neuroscience, or abstract away the biology by simplifying the functional forms of neural computation in large-scale systems for machine intelligence with high integration density and energy efficiency. Here we report a hybrid which offers biophysical realism in the emulation of multi-compartmental neuronal network dynamics at very large scale with high implementation efficiency, and yet with high flexibility in configuring the functional form and the network topology. The integrate-and-fire array transceiver (IFAT) chip emulates the continuous-time analog membrane dynamics of 65 k two-compartment neurons with conductance-based synapses. Fired action potentials are registered as address-event encoded output spikes, while the four types of synapses coupling to each neuron are activated by address-event decoded input spikes for fully reconfigurable synaptic connectivity, facilitating virtual wiring as implemented by routing address-event spikes externally through synaptic routing table. Peak conductance strength of synapse activation specified by the address-event input spans three decades of dynamic range, digitally controlled by pulse width and amplitude modulation (PWAM) of the drive voltage activating the log-domain linear synapse circuit. Two nested levels of micro-pipelining in the IFAT architecture improve both throughput and efficiency of synaptic input. This two-tier micro-pipelining results in a measured sustained peak throughput of 73 Mspikes/s and overall chip-level energy efficiency of 22 pJ/spike. Non-uniformity in digitally encoded synapse strength due to analog mismatch is mitigated through single-point digital offset calibration. Combined with the flexibly layered and recurrent synaptic connectivity provided by hierarchical address-event routing of registered spike events through external memory, the IFAT lends itself to efficient large-scale emulation of general biophysical spiking neural networks, as well as rate-based mapping of rectified linear unit (ReLU) neural activations.
- Published
- 2023
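The virtual-wiring scheme in entry 2 routes each output address-event through an external synaptic routing table, so connectivity is changed by editing the table rather than the chip. A minimal Python sketch of that lookup step; the table layout and field names are illustrative assumptions, not the IFAT's actual memory format:

```python
# Toy address-event "virtual wiring": connectivity lives in an external
# routing table keyed by source address, so rewiring means editing the table.
from collections import defaultdict

routing_table = defaultdict(list)
routing_table[3] = [(7, "exc", 12), (9, "inh", 4)]  # hypothetical entries

def route_event(src_addr):
    """Expand one output address-event into the synaptic events it triggers:
    (target neuron, synapse type, weight index)."""
    return list(routing_table[src_addr])

print(route_event(3))  # [(7, 'exc', 12), (9, 'inh', 4)]
```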
3. Building and benchmarking the motivated deception corpus: Improving the quality of deceptive text through gaming
- Author
-
Barsever, Dan, Steyvers, Mark, and Neftci, Emre
- Subjects
Biological Psychology ,Cognitive and Computational Psychology ,Mathematical Sciences ,Statistics ,Psychology ,Deception ,Text ,Machine learning ,Neural networks ,Corpus ,BERT ,Natural language processing ,Truth ,Lie ,Artificial Intelligence and Image Processing ,Cognitive Sciences ,Experimental Psychology ,Biological psychology ,Cognitive and computational psychology - Abstract
When one studies fake news or false reviews, the first step to take is to find a corpus of text samples to work with. However, most deceptive corpora suffer from an intrinsic problem: there is little incentive for the providers of the deception to put in their best effort, which risks lowering the quality and realism of the deception. The corpus described in this project, the Motivated Deception Corpus, aims to rectify this problem by gamifying the process of deceptive text collection. By having subjects play the game Two Truths and a Lie, and by rewarding those subjects that successfully fool their peers, we collect samples in such a way that the process itself improves the quality of the text. We have amassed a large corpus of deceptive text that is strongly incentivized to be convincing, and thus more reflective of real deceptive text. We provide results from several configurations of neural network prediction models to establish machine learning benchmarks on the data. This new corpus is demonstrably more challenging to classify with the current state of the art than previous corpora.
- Published
- 2022
4. Design principles for lifelong learning AI accelerators
- Author
-
Kudithipudi, Dhireesha, Daram, Anurag, Zyarah, Abdullah M., Zohora, Fatima Tuz, Aimone, James B., Yanguas-Gil, Angel, Soures, Nicholas, Neftci, Emre, Mattina, Matthew, Lomonaco, Vincenzo, Thiem, Clare D., and Epstein, Benjamin
- Published
- 2023
- Full Text
- View/download PDF
5. Online Few-Shot Gesture Learning on a Neuromorphic Processor
- Author
-
Stewart, Kenneth, Orchard, Garrick, Shrestha, Sumit Bam, and Neftci, Emre
- Subjects
Neuromorphics ,Training ,Hardware ,Gesture recognition ,Artificial neural networks ,Neuromorphic computing ,spiking neural networks ,on-chip learning ,few-shot learning ,online learning ,cs.NE - Abstract
We present the Surrogate-gradient Online Error-triggered Learning (SOEL) system for online few-shot learning on neuromorphic processors. The SOEL learning system uses a combination of transfer learning and principles of computational neuroscience and deep learning. We show that partially trained deep Spiking Neural Networks (SNNs) implemented on neuromorphic hardware can rapidly adapt online to new classes of data within a domain. SOEL updates trigger when an error occurs, enabling faster learning with fewer updates. Using gesture recognition as a case study, we show SOEL can be used for online few-shot learning of new classes of pre-recorded gesture data and rapid online learning of new gestures from data streamed live from a Dynamic Active-pixel Vision Sensor to an Intel Loihi neuromorphic research processor.
- Published
- 2020
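SOEL's key mechanism, per the abstract of entry 5, is that plasticity fires only when an error occurs. A minimal NumPy sketch of that idea for a single readout layer; the threshold value and the fast-sigmoid surrogate derivative are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def soel_style_update(w, pre_trace, v, out_rate, target_rate,
                      err_threshold=0.1, lr=0.01):
    """Surrogate-gradient update applied only where the error is large.

    w: (n_out, n_in) weights; pre_trace: (n_in,) filtered presynaptic trace;
    v: (n_out,) membrane potentials; rates are per output neuron."""
    err = out_rate - target_rate                 # per-neuron error signal
    triggered = np.abs(err) > err_threshold      # the "error-triggered" gate
    surrogate = 1.0 / (1.0 + np.abs(v)) ** 2     # fast-sigmoid pseudo-derivative
    dw = -lr * (err * triggered * surrogate)[:, None] * pre_trace[None, :]
    return w + dw, int(triggered.sum())          # new weights, updates issued

# Example: only neurons with |error| > 0.1 trigger weight updates.
w = np.zeros((3, 5))
w, n_upd = soel_style_update(w, np.ones(5), np.zeros(3),
                             out_rate=np.array([0.9, 0.5, 0.1]),
                             target_rate=np.array([1.0, 0.5, 0.8]))
print(n_upd)  # 1
```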
6. Memory Organization for Energy-Efficient Learning and Inference in Digital Neuromorphic Accelerators
- Author
-
Schaefer, Clemens JS, Faley, Patrick, Neftci, Emre O, and Joshi, Siddharth
- Subjects
cs.NE ,cs.LG ,stat.ML - Abstract
The energy efficiency of neuromorphic hardware is greatly affected by the energy of storing, accessing, and updating synaptic parameters. Various methods of memory organisation targeting energy-efficient digital accelerators have been investigated in the past; however, they do not completely encapsulate the energy costs at a system level. To address this shortcoming and to account for various overheads, we synthesize the controller and memory for different encoding schemes and extract the energy costs from these synthesized blocks. Additionally, we introduce functional encoding for structured connectivity such as the connectivity in convolutional layers. Functional encoding offers a 58% reduction in the energy to implement a backward pass and weight update in such layers compared to existing index-based solutions. We show that for a 2-layer spiking neural network trained to retain a spatio-temporal pattern, bitmap (PB-BMP) based organization can encode the sparser networks more efficiently. This form of encoding delivers a 1.37x improvement in energy efficiency, coming at the cost of a 4% degradation in network retention accuracy as measured by the van Rossum distance.
- Published
- 2020
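The "functional encoding" of entry 6 replaces stored index lists with connectivity computed on the fly from layer structure: for a convolution, the targets of a spike follow arithmetically from its source address. A toy 1-D sketch of that idea; the addressing scheme is an assumption for illustration only:

```python
def conv1d_targets(pre_idx, n_pre, kernel_size=3, stride=1):
    """Compute, rather than look up, the postsynaptic neurons reached by
    presynaptic neuron `pre_idx` in a 1-D convolutional layer, together
    with the kernel tap (shared weight index) that connects them."""
    n_post = (n_pre - kernel_size) // stride + 1
    targets = []
    for post in range(n_post):
        tap = pre_idx - post * stride          # which kernel element applies
        if 0 <= tap < kernel_size:
            targets.append((post, tap))        # (post neuron, weight index)
    return targets

print(conv1d_targets(4, n_pre=8))  # [(2, 2), (3, 1), (4, 0)]
```

No per-synapse index storage is needed; only the kernel weights and the layer geometry are kept in memory, which is where the energy saving comes from.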
7. Spiking Neural Network Learning, Benchmarking, Programming and Executing
- Author
-
Li, Guoqi, Deng, Lei, Chua, Yansong, Li, Peng, Neftci, Emre O, and Li, Haizhou
- Subjects
Neurosciences ,Psychology ,Cognitive Sciences ,Biological psychology - Published
- 2020
8. Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE)
- Author
-
Kaiser, Jacques, Mostafa, Hesham, and Neftci, Emre
- Subjects
Biological Psychology ,Biomedical and Clinical Sciences ,Neurosciences ,Psychology ,spiking neural network ,embedded learning ,neuromorphic hardware ,surrogate gradient algorithm ,backpropagation ,cs.NE ,Cognitive Sciences ,Biological psychology - Abstract
A growing body of work underlines striking similarities between biological neural networks and recurrent, binary neural networks. A relatively smaller body of work, however, addresses the similarities between learning dynamics employed in deep artificial neural networks and synaptic plasticity in spiking neural networks. The challenge preventing this is largely caused by the discrepancy between the dynamical properties of synaptic plasticity and the requirements for gradient backpropagation. Learning algorithms that approximate gradient backpropagation using local error functions can overcome this challenge. Here, we introduce Deep Continuous Local Learning (DECOLLE), a spiking neural network equipped with local error functions for online learning with no memory overhead for computing gradients. DECOLLE is capable of learning deep spatio temporal representations from spikes relying solely on local information, making it compatible with neurobiology and neuromorphic hardware. Synaptic plasticity rules are derived systematically from user-defined cost functions and neural dynamics by leveraging existing autodifferentiation methods of machine learning frameworks. We benchmark our approach on the event-based neuromorphic dataset N-MNIST and DvsGesture, on which DECOLLE performs comparably to the state-of-the-art. DECOLLE networks provide continuously learning machines that are relevant to biology and supportive of event-based, low-power computer vision architectures matching the accuracies of conventional computers on tasks where temporal precision and speed are essential.
- Published
- 2020
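DECOLLE (entry 8) attaches a fixed random readout and a local loss to every layer, so no gradient crosses layer boundaries and nothing is stored for a backward pass. A minimal NumPy sketch of one such local update; the layer sizes, readout construction, and squared-error loss are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_classes = 100, 64, 10
W = rng.normal(0, 0.1, (n_hidden, n_in))        # trainable layer weights
B = rng.normal(0, 0.1, (n_classes, n_hidden))   # fixed random local readout

def decolle_style_local_update(pre_trace, v, target, lr=1e-3):
    """One local learning step: the layer's spikes feed a fixed readout,
    and the readout error trains only this layer's weights."""
    spikes = (v > 0).astype(float)               # hard threshold forward
    y = B @ spikes                               # local readout
    err = y - target                             # local MSE gradient
    surrogate = 1.0 / (1.0 + np.abs(v)) ** 2     # pseudo-derivative of the step
    delta = (B.T @ err) * surrogate              # never leaves this layer
    return W - lr * np.outer(delta, pre_trace)

x = rng.random(n_in); v = rng.normal(size=n_hidden)
W = decolle_style_local_update(x, v, np.eye(n_classes)[3])
```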
9. Embodied Neuromorphic Vision with Continuous Random Backpropagation
- Author
-
Kaiser, Jacques, Friedrich, Alexander, Tieck, J Camilo Vasquez, Reichard, Daniel, Roennau, Arne, Neftci, Emre, and Dillmann, Ruediger
- Subjects
cs.NE ,cs.AI ,cs.LG - Abstract
Spike-based communication between biological neurons is sparse and unreliable. This enables the brain to process visual information from the eyes efficiently. Taking inspiration from biology, artificial spiking neural networks coupled with silicon retinas attempt to model these computations. Recent findings in machine learning allowed the derivation of a family of powerful synaptic plasticity rules approximating backpropagation for spiking networks. Are these rules capable of processing real-world visual sensory data? In this paper, we evaluate the performance of Event-Driven Random Back-Propagation (eRBP) at learning representations from event streams provided by a Dynamic Vision Sensor (DVS). First, we show that eRBP matches state-of-the-art performance on the DvsGesture dataset with the addition of a simple covert attention mechanism. By remapping visual receptive fields relative to the center of the motion, this attention mechanism provides translation invariance at low computational cost compared to convolutions. Second, we successfully integrate eRBP in a real robotic setup, where a robotic arm grasps objects according to detected visual affordances. In this setup, visual information is actively sensed by a DVS mounted on a robotic head performing microsaccadic eye movements. We show that our method classifies affordances within 100 ms after microsaccade onset, which is comparable to human performance reported in behavioral studies. Our results suggest that advances in neuromorphic technology and plasticity rules enable the development of autonomous robots operating at high speed and low energy consumption.
- Published
- 2020
10. Editorial: Spiking Neural Network Learning, Benchmarking, Programming and Executing.
- Author
-
Li, Guoqi, Deng, Lei, Chua, Yansong, Li, Peng, Neftci, Emre O, and Li, Haizhou
- Subjects
SNN benchmarks ,SNN learning algorithms ,deep spiking neural networks ,neuromorphics ,programming framework ,Neurosciences ,Psychology ,Cognitive Sciences - Published
- 2020
11. Achieving efficient interpretability of reinforcement learning via policy distillation and selective input gradient regularization
- Author
-
Xing, Jinwei, Nagata, Takashi, Zou, Xinyun, Neftci, Emre, and Krichmar, Jeffrey L.
- Published
- 2023
- Full Text
- View/download PDF
12. Error-triggered Three-Factor Learning Dynamics for Crossbar Arrays
- Author
-
Payvand, Melika, Fouda, Mohammed, Kurdahi, Fadi, Eltawil, Ahmed, and Neftci, Emre O
- Subjects
cs.ET - Abstract
Recent breakthroughs suggest that local, approximate gradient descent learning is compatible with Spiking Neural Networks (SNNs). Although SNNs can be scalably implemented using neuromorphic VLSI, an architecture that can learn in-situ as accurately as conventional processors is still missing. Here, we propose a subthreshold circuit architecture designed through insights obtained from machine learning and computational neuroscience that could achieve such accuracy. Using a surrogate gradient learning framework, we derive local, error-triggered learning dynamics compatible with crossbar arrays and the temporal dynamics of SNNs. The derivation reveals that circuits used for inference and training dynamics can be shared, which simplifies the circuit and suppresses the effects of fabrication mismatch. We present SPICE simulations on XFAB 180nm process, as well as large-scale simulations of the spiking neural networks on event-based benchmarks, including a gesture recognition task. Our results show that the number of updates can be reduced hundred-fold compared to the standard rule while achieving performances that are on par with the state-of-the-art.
- Published
- 2019
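Entry 12 derives a three-factor rule in which a per-neuron error signal gates an outer-product update, the natural primitive of a crossbar (row pulse times column pulse). A schematic NumPy sketch; the traces, surrogate shape, and trigger condition are simplified assumptions:

```python
import numpy as np

def three_factor_update(G, pre_trace, v, error, trigger=0.05, lr=1e-2):
    """Error-triggered three-factor update for a conductance matrix G.

    Factors: presynaptic trace (column), postsynaptic surrogate derivative
    (row), and an error factor that also decides *whether* to update."""
    gate = np.abs(error) > trigger               # skip updates on small error
    surrogate = 1.0 / (1.0 + np.abs(v)) ** 2     # postsynaptic pseudo-derivative
    return G - lr * np.outer(error * gate * surrogate, pre_trace)
```

Skipping updates whenever the error is below the trigger is what yields the hundred-fold reduction in update count claimed in the abstract.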
13. On-chip Few-shot Learning with Surrogate Gradient Descent on a Neuromorphic Processor
- Author
-
Stewart, Kenneth, Orchard, Garrick, Shrestha, Sumit Bam, and Neftci, Emre
- Subjects
cs.NE - Abstract
Recent work suggests that synaptic plasticity dynamics in biological models of neurons and neuromorphic hardware are compatible with gradient-based learning (Neftci et al., 2019). Gradient-based learning requires iterating several times over a dataset, which is both time-consuming and constrains the training samples to be independently and identically distributed. This is incompatible with learning systems that do not have boundaries between training and inference, such as in neuromorphic hardware. One approach to overcome these constraints is transfer learning, where a portion of the network is pre-trained and mapped into hardware and the remaining portion is trained online. Transfer learning has the advantage that pre-training can be accelerated offline if the task domain is known, and few samples of each class are sufficient for learning the target task at reasonable accuracies. Here, we demonstrate online surrogate gradient few-shot learning on Intel's Loihi neuromorphic research processor using features pre-trained with spike-based gradient backpropagation-through-time. Our experimental results show that the Loihi chip can learn gestures online using a small number of shots and achieve results that are comparable to the models simulated on a conventional processor.
- Published
- 2019
14. Contrastive Hebbian learning with random feedback weights
- Author
-
Detorakis, Georgios, Bartley, Travis, and Neftci, Emre
- Subjects
Information and Computing Sciences ,Machine Learning ,Neurosciences ,Feedback ,Neural Networks ,Computer ,Random contrastive Hebbian learning ,Supervised learning ,Unsupervised learning ,Random feedback ,cs.LG ,q-bio.NC ,stat.ML ,Artificial Intelligence & Image Processing ,Artificial intelligence ,Machine learning ,Statistics - Abstract
Neural networks are commonly trained to make predictions through learning algorithms. Contrastive Hebbian learning, which is a powerful rule inspired by gradient backpropagation, is based on Hebb's rule and the contrastive divergence algorithm. It operates in two phases, the free phase, where the data are fed to the network, and a clamped phase, where the target signals are clamped to the output layer of the network and the feedback signals are transformed through the transpose synaptic weight matrices. This implies symmetries at the synaptic level, for which there is no evidence in the brain so far. In this work, we propose a new variant of the algorithm, called random contrastive Hebbian learning, which does not rely on any synaptic weights symmetries. Instead, it uses random matrices to transform the feedback signals during the clamped phase, and the neural dynamics are described by first order non-linear differential equations. The algorithm is experimentally verified by solving a Boolean logic task, classification tasks (handwritten digits and letters), and an autoencoding task. This article also shows how the parameters affect learning, especially the random matrices. We use the pseudospectra analysis to investigate further how random matrices impact the learning process. Finally, we discuss the biological plausibility of the proposed algorithm, and how it can give rise to better computational models for learning.
- Published
- 2019
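A compact sketch of the random contrastive Hebbian update described in entry 14, using a fixed random matrix in place of the transposed weights during the clamped phase. This is a discrete-time, single-pass simplification; the published model uses continuous-time first-order dynamics:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0, .3, (n_hid, n_in))
W2 = rng.normal(0, .3, (n_out, n_hid))
G = rng.normal(0, .3, (n_hid, n_out))   # fixed random feedback, not W2.T

def phi(x):
    return np.tanh(x)

def rchl_step(x, target, lr=0.05):
    h_free = phi(W1 @ x)                     # free phase: data only
    y_free = phi(W2 @ h_free)
    h_clmp = phi(W1 @ x + G @ target)        # clamped phase: random feedback
    y_clmp = target                          # output clamped to the target
    dW1 = np.outer(h_clmp, x) - np.outer(h_free, x)          # contrastive
    dW2 = np.outer(y_clmp, h_clmp) - np.outer(y_free, h_free)
    return lr * dW1, lr * dW2

dW1, dW2 = rchl_step(np.array([1., 0., 0., 1.]), np.array([1., -1.]))
W1 += dW1; W2 += dW2
```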
15. Surrogate Gradient Learning in Spiking Neural Networks
- Author
-
Neftci, Emre O, Mostafa, Hesham, and Zenke, Friedemann
- Subjects
cs.NE ,q-bio.NC - Abstract
Spiking neural networks are nature's versatile solution to fault-tolerant and energy efficient signal processing. To translate these benefits into hardware, a growing number of neuromorphic spiking neural network processors attempt to emulate biological neural networks. These developments have created an imminent need for methods and tools to enable such systems to solve real-world signal processing problems. Like conventional neural networks, spiking neural networks can be trained on real, domain specific data. However, their training requires overcoming a number of challenges linked to their binary and dynamical nature. This article elucidates step-by-step the problems typically encountered when training spiking neural networks, and guides the reader through the key concepts of synaptic plasticity and data-driven learning in the spiking setting. To that end, it gives an overview of existing approaches and provides an introduction to surrogate gradient methods, specifically, as a particularly flexible and efficient method to overcome the aforementioned challenges.
- Published
- 2019
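The core trick surveyed in entry 15: keep the hard spike threshold in the forward pass but substitute a smooth pseudo-derivative in the backward pass. A self-contained NumPy sketch using the fast-sigmoid surrogate, one common choice among several discussed in the literature; the threshold and steepness values are arbitrary:

```python
import numpy as np

def spike_forward(v, theta=1.0):
    """Forward pass: non-differentiable Heaviside spike."""
    return (v >= theta).astype(float)

def spike_backward(v, theta=1.0, beta=10.0):
    """Backward pass: smooth surrogate for dS/dv (fast sigmoid derivative),
    used in place of the Heaviside's zero-almost-everywhere gradient."""
    return 1.0 / (beta * np.abs(v - theta) + 1.0) ** 2

v = np.linspace(0.0, 2.0, 5)
print(spike_forward(v))    # [0. 0. 1. 1. 1.]
print(spike_backward(v))   # largest near the threshold, small far from it
```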
16. Memory-Efficient Synaptic Connectivity for Spike-Timing-Dependent Plasticity
- Author
-
Pedroni, Bruno U, Joshi, Siddharth, Deiss, Stephen R, Sheik, Sadique, Detorakis, Georgios, Paul, Somnath, Augustine, Charles, Neftci, Emre O, and Cauwenberghs, Gert
- Subjects
Biomedical and Clinical Sciences ,Neurosciences ,Neurological ,Affordable and Clean Energy ,synaptic plasticity ,neuromorphic computing ,data structure ,memory architecture ,crossbar array ,Psychology ,Cognitive Sciences ,Biological psychology - Abstract
Spike-Timing-Dependent Plasticity (STDP) is a bio-inspired local incremental weight update rule commonly used for online learning in spike-based neuromorphic systems. In STDP, the intensity of long-term potentiation and depression in synaptic efficacy (weight) between neurons is expressed as a function of the relative timing between pre- and post-synaptic action potentials (spikes), while the polarity of change is dependent on the order (causality) of the spikes. Online STDP weight updates for causal and acausal relative spike times are activated at the onset of post- and pre-synaptic spike events, respectively, implying access to synaptic connectivity both in forward (pre-to-post) and reverse (post-to-pre) directions. Here we study the impact of different arrangements of synaptic connectivity tables on weight storage and STDP updates for large-scale neuromorphic systems. We analyze the memory efficiency for varying degrees of density in synaptic connectivity, ranging from crossbar arrays for full connectivity to pointer-based lookup for sparse connectivity. The study includes comparison of storage and access costs and efficiencies for each memory arrangement, along with a trade-off analysis of the benefits of each data structure depending on application requirements and budget. Finally, we present an alternative formulation of STDP via a delayed causal update mechanism that permits efficient weight access, requiring no more than forward connectivity lookup. We show functional equivalence of the delayed causal updates to the original STDP formulation, with substantial savings in storage and access costs and efficiencies for networks with sparse synaptic connectivity as typically encountered in large-scale models in computational neuroscience.
- Published
- 2019
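The trade-off analyzed in entry 16 can be made concrete with a back-of-the-envelope storage model: a crossbar stores every possible synapse, while pointer-based lookup stores only existing ones plus index overhead. A toy calculation; the bit widths are illustrative assumptions:

```python
import math

def crossbar_bits(n_pre, n_post, w_bits=8):
    """Dense crossbar: one weight slot per (pre, post) pair."""
    return n_pre * n_post * w_bits

def pointer_bits(n_pre, n_post, fan_out, w_bits=8):
    """Pointer-based table: per existing synapse, a target index + weight."""
    idx_bits = math.ceil(math.log2(n_post))
    return n_pre * fan_out * (idx_bits + w_bits)

n = 4096
for fan_out in (64, 512, 4096):
    ratio = pointer_bits(n, n, fan_out) / crossbar_bits(n, n)
    print(f"fan-out {fan_out:5d}: pointer/crossbar = {ratio:.3f}")
# Pointer lookup wins for sparse connectivity and loses once the per-synapse
# index overhead outweighs the empty crossbar slots it avoids storing.
```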
17. Inherent Weight Normalization in Stochastic Neural Networks
- Author
-
Detorakis, Georgios, Dutta, Sourav, Khanna, Abhishek, Jerry, Matthew, Datta, Suman, and Neftci, Emre
- Subjects
cs.LG ,stat.ML ,Psychology ,Cognitive Sciences - Abstract
Multiplicative stochasticity such as Dropout improves the robustness and generalizability of deep neural networks. Here, we further demonstrate that always-on multiplicative stochasticity combined with simple threshold neurons are sufficient operations for deep neural networks. We call such models Neural Sampling Machines (NSM). We find that the probability of activation of the NSM exhibits a self-normalizing property that mirrors Weight Normalization, a previously studied mechanism that fulfills many of the features of Batch Normalization in an online fashion. The normalization of activities during training speeds up convergence by preventing internal covariate shift caused by changes in the input distribution. The always-on stochasticity of the NSM confers the following advantages: the network is identical in the inference and learning phases, making the NSM suitable for online learning; it can exploit stochasticity inherent to a physical substrate such as analog non-volatile memories for in-memory computing; and it is suitable for Monte Carlo sampling, while requiring almost exclusively addition and comparison operations. We demonstrate NSMs on standard classification benchmarks (MNIST and CIFAR) and event-based classification benchmarks (N-MNIST and DVS Gestures). Our results show that NSMs perform comparably or better than conventional artificial neural networks with the same architecture.
- Published
- 2019
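A minimal sketch of the Neural Sampling Machine forward pass of entry 17: always-on multiplicative Bernoulli noise on the synapses feeding simple threshold units, so inference and learning run the very same stochastic network. Sizes and the blank-out rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(0, 1.0, (10, 100))    # weights
x = rng.random(100)                  # input

def nsm_forward(W, x, p=0.5):
    """Threshold units with always-on multiplicative synaptic noise."""
    mask = rng.random(W.shape) < p   # fresh Bernoulli blank-out, every pass
    return ((W * mask) @ x > 0).astype(float)

# Repeated passes give Monte Carlo estimates of activation probabilities,
# using only multiply-by-mask, addition, and comparison operations.
samples = np.mean([nsm_forward(W, x) for _ in range(1000)], axis=0)
print(samples)
```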
18. Neural sampling machine with stochastic synapse allows brain-like learning and inference
- Author
-
Dutta, Sourav, Detorakis, Georgios, Khanna, Abhishek, Grisafe, Benjamin, Neftci, Emre, and Datta, Suman
- Published
- 2022
- Full Text
- View/download PDF
19. Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning
- Author
-
Detorakis, Georgios, Sheik, Sadique, Augustine, Charles, Paul, Somnath, Pedroni, Bruno U, Dutt, Nikil, Krichmar, Jeffrey, Cauwenberghs, Gert, and Neftci, Emre
- Subjects
Biological Psychology ,Biomedical and Clinical Sciences ,Neurosciences ,Psychology ,Bioengineering ,Machine Learning and Artificial Intelligence ,Networking and Information Technology R&D (NITRD) ,Neuromorphic computing ,neuromorphic algorithms ,three-factor learning ,on-line learning ,event-based computing ,spiking neural networks ,cs.NE ,cs.AI ,Cognitive Sciences ,Biological psychology - Abstract
Embedded, continual learning for autonomous and adaptive behavior is a key application of neuromorphic hardware. However, neuromorphic implementations of embedded learning at large scales that are both flexible and efficient have been hindered by a lack of a suitable algorithmic framework. As a result, most neuromorphic hardware are trained off-line on large clusters of dedicated processors or GPUs and transferred post hoc to the device. We address this by introducing the neural and synaptic array transceiver (NSAT), a neuromorphic computational framework facilitating flexible and efficient embedded learning by matching algorithmic requirements and neural and synaptic dynamics. NSAT supports event-driven supervised, unsupervised and reinforcement learning algorithms including deep learning. We demonstrate the NSAT in a wide range of tasks, including the simulation of Mihalas-Niebur neuron, dynamic neural fields, event-driven random back-propagation for event-based deep learning, event-based contrastive divergence for unsupervised learning, and voltage-based learning rules for sequence learning. We anticipate that this contribution will establish the foundation for a new generation of devices enabling adaptive mobile systems, wearable devices, and robots with data-driven autonomy.
- Published
- 2018
20. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines
- Author
-
Neftci, Emre O, Augustine, Charles, Paul, Somnath, and Detorakis, Georgios
- Subjects
Biological Psychology ,Biomedical and Clinical Sciences ,Neurosciences ,Psychology ,Neurological ,spiking neural networks ,backpropagation algorithm ,feedback alignment ,embedded cognition ,stochastic processes ,cs.NE ,cs.AI ,Cognitive Sciences ,Biological psychology - Abstract
An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Gradient Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
- Published
- 2017
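The abstract of entry 20 notes that eRBP needs only one addition and two comparisons per synapse: add the error-modulated term when the postsynaptic membrane potential lies inside a window. A schematic NumPy sketch; the window bounds and random projection are illustrative, and the paper's actual neuron is a two-compartment I&F model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_out, n_hid = 10, 64
B = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random feedback weights

def erbp_update(w, pre_spikes, v, y, target, v_lo=-1.0, v_hi=1.0, lr=1e-3):
    """w: (n_hid, n_in) weights; v: (n_hid,) membrane potentials.

    The error is broadcast through fixed random weights B (feedback
    alignment), and each weight change is gated by two comparisons on v."""
    err_mod = B @ (y - target)                    # error-modulated signal
    gate = (v > v_lo) & (v < v_hi)                # two comparisons per neuron
    dw = -lr * (err_mod * gate)[:, None] * pre_spikes[None, :]  # one addition
    return w + dw
```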
21. Training a Probabilistic Graphical Model With Resistive Switching Electronic Synapses
- Author
-
Eryilmaz, Sukru Burc, Neftci, Emre, Joshi, Siddharth, Kim, SangBum, BrightSky, Matthew, Lung, Hsiang-Lan, Lam, Chung, Cauwenberghs, Gert, and Wong, Hon-Sum Philip
- Subjects
Engineering ,Electronics ,Sensors and Digital Hardware ,Affordable and Clean Energy ,Brain-inspired hardware ,cognitive computing ,neuromorphic computing ,phase change memory(PCM) ,resistive memory ,cs.NE ,cs.DC ,cs.ET ,Electrical and Electronic Engineering ,Applied Physics ,Electronics ,sensors and digital hardware - Abstract
Current large-scale implementations of deep learning and data mining require thousands of processors, massive amounts of off-chip memory, and consume gigajoules of energy. New memory technologies, such as nanoscale two-terminal resistive switching memory devices, offer a compact, scalable, and low-power alternative that permits on-chip colocated processing and memory in fine-grain distributed parallel architecture. Here, we report the first use of resistive memory devices for implementing and training a restricted Boltzmann machine (RBM), a generative probabilistic graphical model as a key component for unsupervised learning in deep networks. We experimentally demonstrate a 45-synapse RBM realized with 90 resistive phase change memory (PCM) elements trained with a bioinspired variant of the contrastive divergence algorithm, implementing Hebbian and anti-Hebbian weight updates. The resistive PCM devices show a twofold to tenfold reduction in error rate in a missing pixel pattern completion task trained over 30 epochs, compared with untrained case. Measured programming energy consumption is 6.1 nJ per epoch with the PCM devices, a factor of 150 times lower than the conventional processor-memory systems. We analyze and discuss the dependence of learning performance on cycle-to-cycle variations and number of gradual levels in the PCM analog memory devices.
- Published
- 2016
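Entry 21 trains an RBM with Hebbian updates in the data-driven phase and anti-Hebbian updates in the model-driven phase, realized as programming pulses on phase-change devices. A schematic CD-1 sketch with each signed weight stored as a differential pair of non-negative conductances, w = G+ − G−; the device model (unipolar increments with multiplicative cycle-to-cycle noise) is a crude assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_pcm_step(Gp, Gm, v_data, lr=0.05, noise=0.01):
    """One contrastive-divergence step on differential PCM conductances."""
    W = Gp - Gm                                   # effective signed weight
    h_data = (sigmoid(W @ v_data) > rng.random(W.shape[0])).astype(float)
    v_model = (sigmoid(W.T @ h_data) > rng.random(W.shape[1])).astype(float)
    h_model = sigmoid(W @ v_model)
    dW = lr * (np.outer(h_data, v_data) - np.outer(h_model, v_model))
    jitter = 1.0 + noise * rng.standard_normal(dW.shape)  # device variability
    Gp += np.maximum(dW, 0) * jitter              # Hebbian: potentiate G+
    Gm += np.maximum(-dW, 0) * jitter             # anti-Hebbian: potentiate G-
    return Gp, Gm

Gp, Gm = rng.random((8, 16)) * 0.1, rng.random((8, 16)) * 0.1
Gp, Gm = cd1_pcm_step(Gp, Gm, (rng.random(16) > 0.5).astype(float))
```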
22. Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-Power Neuromorphic Hardware
- Author
-
Diehl, Peter U, Zarrella, Guido, Cassidy, Andrew, Pedroni, Bruno U, and Neftci, Emre
- Subjects
Information and Computing Sciences ,Biomedical and Clinical Sciences ,Neurosciences ,Machine Learning ,Underpinning research ,1.1 Normal biological development and functioning ,Neurological ,cs.NE - Abstract
In recent years the field of neuromorphic low-power systems that consume orders of magnitude less power gained significant momentum. However, their wider use is still hindered by the lack of algorithms that can harness the strengths of such architectures. While neuromorphic adaptations of representation learning algorithms are now emerging, efficient processing of temporal sequences or variable-length inputs remains difficult. Recurrent neural networks (RNN) are widely used in machine learning to solve a variety of sequence learning tasks. In this work we present a train-and-constrain methodology that enables the mapping of machine-learned (Elman) RNNs on a substrate of spiking neurons, while being compatible with the capabilities of current and near-future neuromorphic systems. This "train-and-constrain" method consists of first training RNNs using backpropagation through time, then discretizing the weights and finally converting them to spiking RNNs by matching the responses of artificial neurons with those of the spiking neurons. We demonstrate our approach by mapping a natural language processing task (question classification), where we demonstrate the entire mapping process of the recurrent layer of the network on IBM's Neurosynaptic System "TrueNorth", a spike-based digital neuromorphic hardware architecture. TrueNorth imposes specific constraints on connectivity, neural and synaptic parameters. To satisfy these constraints, it was necessary to discretize the synaptic weights and neural activities to 16 levels, and to limit fan-in to 64 inputs. We find that short synaptic delays are sufficient to implement the dynamical (temporal) aspect of the RNN in the question classification task. The hardware-constrained model achieved 74% accuracy in question classification while using less than 0.025% of the cores on one TrueNorth chip, resulting in an estimated power consumption of ~17 uW.
- Published
- 2016
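The "constrain" step of entry 22 discretizes trained weights to 16 levels and caps fan-in at 64. A toy sketch of those two constraints; the pruning rule (keep the largest-magnitude inputs) is an assumed heuristic, not necessarily the paper's:

```python
import numpy as np

def constrain(W, levels=16, max_fan_in=64):
    """Quantize weights to `levels` uniform values and cap each neuron's fan-in."""
    step = np.abs(W).max() / (levels // 2)
    q = np.clip(np.round(W / step), -levels // 2, levels // 2 - 1)
    Wq = q * step                                # 16 uniform weight levels
    for row in Wq:                               # one row = one neuron's inputs
        nz = np.flatnonzero(row)
        if nz.size > max_fan_in:                 # prune the weakest synapses
            drop = nz[np.argsort(np.abs(row[nz]))][:nz.size - max_fan_in]
            row[drop] = 0.0
    return Wq

W = np.random.default_rng(5).normal(0, 1, (4, 100))
Wq = constrain(W)
print(len(np.unique(Wq)), (Wq != 0).sum(axis=1).max())  # <=16 levels, <=64 fan-in
```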
23. Forward Table-Based Presynaptic Event-Triggered Spike-Timing-Dependent Plasticity
- Author
-
Pedroni, Bruno U, Sheik, Sadique, Joshi, Siddharth, Detorakis, Georgios, Paur, Somnath, Augustine, Charles, Neftci, Emre, and Cauwenberghs, Gert
- Subjects
cs.NE - Abstract
Spike-timing-dependent plasticity (STDP) incurs both causal and acausal synaptic weight updates, for negative and positive time differences between pre-synaptic and post-synaptic spike events. For realizing such updates in neuromorphic hardware, current implementations either require forward and reverse lookup access to the synaptic connectivity table, or rely on memory-intensive architectures such as crossbar arrays. We present a novel method for realizing both causal and acausal weight updates using only forward lookup access of the synaptic connectivity table, permitting memory-efficient implementation. A simplified implementation in FPGA, using a single timer variable for each neuron, closely approximates exact STDP cumulative weight updates for neuron refractory periods greater than 10 ms, and reduces to exact STDP for refractory periods greater than the STDP time window. Compared to conventional crossbar implementation, the forward table-based implementation leads to substantial memory savings for sparsely connected networks, supporting scalable neuromorphic systems with fully reconfigurable synaptic connectivity and plasticity.
- Published
- 2016
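The trick in entry 23 is to defer the causal update until the presynaptic neuron's next spike, so both causal and acausal updates need only the forward table. A simplified event-driven sketch with one last-spike-time variable per neuron; the exponential windows and parameter values are illustrative:

```python
import numpy as np

A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0   # ms; illustrative STDP parameters
last_pre, last_post = {}, {}               # one spike-time variable per neuron

def on_post_spike(post, t):
    last_post[post] = t                    # no reverse table access needed

def on_pre_spike(pre, t, targets, weights):
    """All weight updates happen on presynaptic events via the forward table:
    acausal LTD for this spike, plus the *deferred* causal LTP owed to this
    neuron's previous spike (post spikes that landed after it)."""
    t_prev = last_pre.get(pre)
    for post in targets[pre]:              # forward lookup: pre -> posts
        t_post = last_post.get(post)
        if t_post is None:
            continue
        if t_prev is not None and t_prev < t_post:
            weights[pre, post] += A_PLUS * np.exp(-(t_post - t_prev) / TAU)
        weights[pre, post] -= A_MINUS * np.exp(-(t - t_post) / TAU)
    last_pre[pre] = t

# Tiny demo: pre(0) at t=0, post(1) at t=5, pre(0) again at t=30.
w = np.zeros((2, 2)); tbl = {0: [1]}
on_pre_spike(0, 0.0, tbl, w); on_post_spike(1, 5.0); on_pre_spike(0, 30.0, tbl, w)
print(w[0, 1])  # net LTP: the causal pairing (dt=5) outweighs the acausal one
```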
24. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
- Author
-
Neftci, Emre O, Pedroni, Bruno U, Joshi, Siddharth, Al-Shedivat, Maruan, and Cauwenberghs, Gert
- Subjects
Biomedical and Clinical Sciences ,Neurosciences ,Neurological ,cs.NE ,Psychology ,Cognitive Sciences ,Biological psychology - Abstract
Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.
- Published
- 2016
25. TrueHappiness: Neuromorphic Emotion Recognition on TrueNorth
- Author
-
Diehl, Peter U, Pedroni, Bruno U, Cassidy, Andrew, Merolla, Paul, Neftci, Emre, Zarrella, Guido, and IEEE
- Subjects
q-bio.NC ,cs.NE - Abstract
We present an approach to constructing a neuromorphic device that responds to language input by producing neuron spikes in proportion to the strength of the appropriate positive or negative emotional response. Specifically, we perform a fine-grained sentiment analysis task with implementations on two different systems: one using conventional spiking neural network (SNN) simulators and the other one using IBM's Neurosynaptic System TrueNorth. Input words are projected into a high-dimensional semantic space and processed through a fully-connected neural network (FCNN) containing rectified linear units trained via backpropagation. After training, this FCNN is converted to a SNN by substituting the ReLUs with integrate-and-fire neurons. We show that there is practically no performance loss due to conversion to a spiking network on a sentiment analysis test set, i.e. correlations between predictions and human annotations differ by less than 0.02 comparing the original DNN and its spiking equivalent. Additionally, we show that the SNN generated with this technique can be mapped to existing neuromorphic hardware -- in our case, the TrueNorth chip. Mapping to the chip involves 4-bit synaptic weight discretization and adjustment of the neuron thresholds. The resulting end-to-end system can take a user input, i.e. a word in a vocabulary of over 300,000 words, and estimate its sentiment on TrueNorth with a power consumption of approximately 50 uW.
- Published
- 2016
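The ReLU-to-spiking conversion in entry 25 relies on an integrate-and-fire neuron's firing rate being approximately proportional to a ReLU of its input drive. A quick numerical check of that correspondence; the constant-input simulation, time step, and threshold are arbitrary choices for illustration:

```python
def if_rate(drive, theta=1.0, dt=1e-3, T=10.0):
    """Firing rate of a non-leaky integrate-and-fire neuron, constant drive."""
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += drive * dt
        if v >= theta:
            v -= theta          # reset by subtraction preserves rate coding
            spikes += 1
    return spikes / T

for drive in (-2.0, 0.0, 1.5, 3.0):
    relu = max(drive, 0.0) / 1.0    # ReLU output, scaled by the threshold
    print(f"drive {drive:+.1f}: IF rate {if_rate(drive):6.1f}  ReLU {relu:6.1f}")
```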
26. Event-driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.
- Author
-
Neftci, Emre, Augustine, Charles, Paul, Somnath, and Detorakis, Georgios
- Published
- 2016
27. Learning of Chunking Sequences in Cognition and Behavior.
- Author
-
Fonollosa, Jordi, Neftci, Emre, and Rabinovich, Mikhail
- Subjects
Humans ,Cognition ,Learning ,Mental Recall ,Computational Biology ,Algorithms ,Models ,Neurological ,Computer Simulation ,Neurosciences ,Basic Behavioral and Social Science ,Mental Health ,Behavioral and Social Science ,Models ,Neurological ,Bioinformatics ,Biological Sciences ,Information and Computing Sciences ,Mathematical Sciences - Abstract
We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks in a wide variety of tasks, but the dynamical principles of how this is achieved remain unknown. Here, we study the temporal dynamics of chunking for learning cognitive sequences in a chunking representation using a dynamical model of competing modes arranged to evoke hierarchical Winnerless Competition (WLC) dynamics. Sequential memory is represented as trajectories along a chain of metastable fixed points at each level of the hierarchy, and bistable Hebbian dynamics enables the learning of such trajectories in an unsupervised fashion. Using computer simulations, we demonstrate the learning of a chunking representation of sequences and their robust recall. During learning, the dynamics associates a set of modes to each information-carrying item in the sequence and encodes their relative order. During recall, hierarchical WLC guarantees the robustness of the sequence order when the sequence is not too long. The resulting patterns of activities share several features observed in behavioral experiments, such as the pauses between boundaries of chunks, their size and their duration. Failures in learning chunking sequences provide new insights into the dynamical causes of neurological disorders such as Parkinson's disease and schizophrenia.
- Published
- 2015
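Winnerless competition of the kind used in entry 27 is typically modeled with generalized Lotka-Volterra equations, where asymmetric inhibition produces a sequence of metastable "winners". A minimal three-mode sketch with hand-picked coupling; the paper's hierarchical, learned version is far richer:

```python
import numpy as np

# dx_i/dt = x_i * (sigma_i - sum_j rho_ij x_j): the asymmetric rho below
# creates a heteroclinic sequence of transiently winning modes 0 -> 1 -> 2.
sigma = np.ones(3)
rho = np.array([[1.0, 2.0, 0.5],
                [0.5, 1.0, 2.0],
                [2.0, 0.5, 1.0]])

x = np.array([0.9, 0.05, 0.05])
dt, winners = 0.01, []
for step in range(60000):                   # simple Euler integration
    x += dt * x * (sigma - rho @ x)
    x = np.maximum(x, 1e-9)                 # keep activities positive
    if step % 5000 == 0:
        winners.append(int(np.argmax(x)))
print(winners)  # dominant mode steps through 0, 1, 2 in order, dwelling near each
```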
28. Event-driven contrastive divergence: neural sampling foundations
- Author
-
Neftci, Emre, Das, Srinjoy, Pedroni, Bruno, Kreutz-Delgado, Kenneth, and Cauwenberghs, Gert
- Subjects
Biological Psychology ,Biomedical and Clinical Sciences ,Neurosciences ,Psychology ,Markov chain Monte Carlo ,neural sampling ,probabilistic inference ,spiking neurons ,synaptic plasticity ,Cognitive Sciences ,Biological psychology - Published
- 2015
29. Learning Non-deterministic Representations with Energy-based Ensembles
- Author
-
Al-Shedivat, Maruan, Neftci, Emre, and Cauwenberghs, Gert
- Subjects
cs.LG ,cs.NE - Abstract
The goal of a generative model is to capture the distribution underlying the data, typically through latent variables. After training, these variables are often used as a new representation, more effective than the original features in a variety of learning tasks. However, the representations constructed by contemporary generative models are usually point-wise deterministic mappings from the original feature space. Thus, even with representations robust to class-specific transformations, statistically driven models trained on them would not be able to generalize when the labeled data is scarce. Inspired by the stochasticity of the synaptic connections in the brain, we introduce Energy-based Stochastic Ensembles. These ensembles can learn non-deterministic representations, i.e., mappings from the feature space to a family of distributions in the latent space. These mappings are encoded in a distribution over a (possibly infinite) collection of models. By conditionally sampling models from the ensemble, we obtain multiple representations for every input example and effectively augment the data. We propose an algorithm similar to contrastive divergence for training restricted Boltzmann stochastic ensembles. Finally, we demonstrate the concept of the stochastic representations on a synthetic dataset as well as test them in the one-shot learning scenario on MNIST.
- Published
- 2014
30. PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems
- Author
-
Stefanini, Fabio, Neftci, Emre O, Sheik, Sadique, and Indiveri, Giacomo
- Subjects
Networking and Information Technology R&D (NITRD) ,Affordable and Clean Energy ,AER ,NHML ,Python ,VLSI ,neuromorphic systems ,spiking neural network ,Neurosciences ,Cognitive Sciences - Abstract
Neuromorphic hardware offers an electronic substrate for the realization of asynchronous event-based sensory-motor systems and large-scale spiking neural network architectures. In order to characterize these systems, configure them, and carry out modeling experiments, it is often necessary to interface them to workstations. The software used for this purpose typically consists of a large monolithic block of code which is highly specific to the hardware setup used. While this approach can lead to highly integrated hardware/software systems, it hampers the development of modular and reconfigurable infrastructures thus preventing a rapid evolution of such systems. To alleviate this problem, we propose PyNCS, an open-source front-end for the definition of neural network models that is interfaced to the hardware through a set of Python Application Programming Interfaces (APIs). The design of PyNCS promotes modularity, portability and expandability and separates implementation from hardware description. The high-level front-end that comes with PyNCS includes tools to define neural network models as well as to create, monitor and analyze spiking data. Here we report the design philosophy behind the PyNCS framework and describe its implementation. We demonstrate its functionality with two representative case studies, one using an event-based neuromorphic vision sensor, and one using a set of multi-neuron devices for carrying out a cognitive decision-making task involving state-dependent computation. PyNCS, already applicable to a wide range of existing spike-based neuromorphic setups, will accelerate the development of hybrid software/hardware neuromorphic systems, thanks to its code flexibility. The code is open-source and available online at https://github.com/inincs/pyNCS.
- Published
- 2014
31. Event-driven contrastive divergence for spiking neuromorphic systems
- Author
-
Neftci, Emre, Das, Srinjoy, Pedroni, Bruno, Kreutz-Delgado, Kenneth, and Cauwenberghs, Gert
- Subjects
Biomedical and Clinical Sciences ,Neurosciences ,synaptic plasticity ,neuromorphic cognition ,Markov chain monte carlo ,recurrent neural network ,generative model ,cs.NE ,q-bio.NC ,Psychology ,Cognitive Sciences ,Biological psychology - Abstract
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetics which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train a RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
- Published
- 2014
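Entry 31 replaces the discrete CD steps with the network's own recurrent activity: a data-clamped phase and a free-running phase, with Hebbian updates of opposite sign in the two phases. A schematic rate-level sketch of that sign-switched online update; the spiking/STDP machinery is abstracted away, and the phase period is an arbitrary choice:

```python
import numpy as np

def edcd_update(w, pre_act, post_act, t, T=0.1, lr=1e-3):
    """Event-driven-CD-style update: one continuously running network; a
    global signal g(t) flips the sign of the Hebbian update between the
    data-clamped first half of each period and the free-running second half."""
    clamped = (t % T) < T / 2
    g = 1.0 if clamped else -1.0      # +1: data phase, -1: model phase
    return w + lr * g * np.outer(post_act, pre_act)

w = np.zeros((4, 6))
for t in np.arange(0, 0.2, 0.01):     # two clamped/free cycles
    w = edcd_update(w, pre_act=np.ones(6), post_act=np.ones(4), t=t)
```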
32. Data and Power Efficient Intelligence with Neuromorphic Learning Machines
- Author
-
Neftci, Emre O.
- Published
- 2018
- Full Text
- View/download PDF
33. Event-driven contrastive divergence for spiking neuromorphic systems.
- Author
-
Neftci, Emre, Das, Srinjoy, Pedroni, Bruno, Kreutz-Delgado, Kenneth, and Cauwenberghs, Gert
- Subjects
Markov chain monte carlo ,generative model ,neuromorphic cognition ,recurrent neural network ,synaptic plasticity ,cs.NE ,q-bio.NC ,Neurosciences ,Cognitive Sciences ,Psychology - Abstract
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetics which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train a RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
- Published
- 2013
34. Reinforcement learning in artificial and biological systems
- Author
-
Neftci, Emre O. and Averbeck, Bruno B.
- Published
- 2019
- Full Text
- View/download PDF
35. Editorial: Neuro-inspired computing for next-gen AI: Computing model, architectures and learning algorithms.
- Author
-
Pantazi, Angeliki, Rajendran, Bipin, Simeone, Osvaldo, and Neftci, Emre
- Subjects
MACHINE learning ,ARTIFICIAL neural networks - Published
- 2022
- Full Text
- View/download PDF
36. Brain-Inspired Learning on Neuromorphic Substrates.
- Author
-
Zenke, Friedemann and Neftci, Emre O.
- Subjects
RECURRENT neural networks ,MACHINE learning ,SPARSE approximations ,DEEP learning ,NEUROMORPHICS - Abstract
Neuromorphic hardware strives to emulate brain-like neural networks and thus holds the promise for scalable, low-power information processing on temporal data streams. Yet, to solve real-world problems, these networks need to be trained. However, training on neuromorphic substrates creates significant challenges due to the offline character and the required nonlocal computations of gradient-based learning algorithms. This article provides a mathematical framework for the design of practical online learning algorithms for neuromorphic substrates. Specifically, we show a direct connection between real-time recurrent learning (RTRL), an online algorithm for computing gradients in conventional recurrent neural networks (RNNs), and biologically plausible learning rules for training spiking neural networks (SNNs). Furthermore, we motivate a sparse approximation based on block-diagonal Jacobians, which reduces the algorithm’s computational complexity, diminishes the nonlocal information requirements, and empirically leads to good learning performance, thereby improving its applicability to neuromorphic substrates. In summary, our framework bridges the gap between synaptic plasticity and gradient-based approaches from deep learning and lays the foundations for powerful information processing on future neuromorphic hardware systems.
- Published
- 2021
- Full Text
- View/download PDF
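The sparse approximation motivated in entry 36 keeps only the (block-)diagonal part of the RTRL Jacobian, turning gradient computation into local eligibility traces that run forward in time. A minimal sketch for a leaky-integrator layer, fully diagonal for brevity; the loss, trace recursion, and surrogate choices are illustrative assumptions:

```python
import numpy as np

def online_step(w, e_trace, pre_spikes, v, learning_signal,
                alpha=0.9, lr=1e-3):
    """Diagonal-RTRL-style online update for a leaky integrator layer.

    e_trace[i, j] approximates dv_i/dw_ij and is updated by a local
    recursion, so no backward pass or stored history is needed."""
    e_trace = alpha * e_trace + pre_spikes[None, :]      # forward-in-time trace
    surrogate = 1.0 / (1.0 + np.abs(v)) ** 2             # dS/dv stand-in
    dw = -lr * (learning_signal * surrogate)[:, None] * e_trace
    return w + dw, e_trace

w, e = np.zeros((3, 4)), np.zeros((3, 4))
w, e = online_step(w, e, pre_spikes=np.array([1., 0., 1., 0.]),
                   v=np.zeros(3), learning_signal=np.array([0.2, -0.1, 0.0]))
```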
37. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.
- Author
-
Neftci, Emre O., Augustine, Charles, Paul, Somnath, and Detorakis, Georgios
- Subjects
ARTIFICIAL neural networks ,DEEP learning ,BACK propagation - Abstract
An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Gradient Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
- Published
- 2017
- Full Text
- View/download PDF
38. Memristor-based neural networks: Synaptic versus neuronal stochasticity.
- Author
-
Naous, Rawan, AlShedivat, Maruan, Neftci, Emre, Cauwenberghs, Gert, and Salama, Khaled Nabil
- Subjects
MEMRISTORS ,ELECTRIC resistors - Abstract
In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being extensively studied using resistive memories or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered as the main source of stochasticity of its operation. Building on its inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses. Two approaches of stochastic neural networks are investigated. Aside from the size and area perspective, the impact on the system performance, in terms of accuracy, recognition rates, and learning, among these two approaches and where the memristor would fall into place are the main comparison points to be considered.
- Published
- 2016
- Full Text
- View/download PDF
39. Dynamic State and Parameter Estimation Applied to Neuromorphic Systems.
- Author
-
Neftci, Emre Ozgur, Toth, Bryan, Indiveri, Giacomo, and Abarbanel, Henry D. I.
- Subjects
ARTIFICIAL neural networks, PARAMETER estimation, INFORMATION theory, VERY large scale circuit integration, COMPUTER software - Abstract
Neuroscientists often propose detailed computational models to probe the properties of the neural systems they study. With the advent of neuromorphic engineering, there is an increasing number of hardware electronic analogs of biological neural systems being proposed as well. However, for both biological and hardware systems, it is often difficult to estimate the parameters of the model so that they are meaningful to the experimental system under study, especially when these models involve a large number of states and parameters that cannot be simultaneously measured. We have developed a procedure to solve this problem in the context of interacting neural populations using a recently developed dynamic state and parameter estimation (DSPE) technique. This technique uses synchronization as a tool for dynamically coupling experimentally measured data to its corresponding model to determine its parameters and internal state variables. Typically experimental data are obtained from the biological neural system and the model is simulated in software; here we show that this technique is also efficient in validating proposed network models for neuromorphic spike-based very large-scale integration (VLSI) chips and that it is able to systematically extract network parameters such as synaptic weights, time constants, and other variables that are not accessible by direct observation. Our results suggest that this method can become a very useful tool for model-based identification and configuration of neuromorphic multichip VLSI systems.
- Published
- 2012
- Full Text
- View/download PDF
40. A Systematic Method for Configuring VLSI Networks of Spiking Neurons.
- Author
-
Neftci, Emre, Chicca, Elisabetta, Indiveri, Giacomo, and Douglas, Rodney
- Subjects
VERY large scale circuit integration, COMPUTER networks, ARTIFICIAL neural networks, NEURONS, ROBOTICS, DIGITAL electronics, MATHEMATICAL models, COMPLEMENTARY metal oxide semiconductors - Abstract
An increasing number of research groups are developing custom hybrid analog/digital very large scale integration (VLSI) chips and systems that implement hundreds to thousands of spiking neurons with biophysically realistic dynamics, with the intention of emulating brainlike real-world behavior in hardware and robotic systems rather than simply simulating their performance on general-purpose digital computers. Although the electronic engineering aspects of these emulation systems are proceeding well, progress toward the actual emulation of brainlike tasks is restricted by the lack of suitable high-level configuration methods of the kind that have already been developed over many decades for simulations on general-purpose computers. The key difficulty is that the dynamics of the CMOS electronic analogs are determined by transistor biases that do not map simply to the parameter types and values used in typical abstract mathematical models of neurons and their networks. Here we provide a general method for resolving this difficulty. We describe a parameter mapping technique that permits an automatic configuration of VLSI neural networks so that their electronic emulation conforms to a higher-level neuronal simulation. We show that the neurons configured by our method exhibit spike timing statistics and temporal dynamics that are the same as those observed in the software simulated neurons and, in particular, that the key parameters of recurrent VLSI neural networks (e.g., implementing soft winner-take-all) can be precisely tuned. The proposed method permits a seamless integration of software simulations with hardware emulations and intertranslatability between the parameters of abstract neuronal models and their emulation counterparts. Most important, our method offers a route toward a high-level task configuration language for neuromorphic VLSI systems.
- Published
- 2011
- Full Text
- View/download PDF
41. Synthesizing cognition in neuromorphic electronic systems.
- Author
-
Neftci, Emre, Binas, Jonathan, Rutishauser, Ueli, Chicca, Elisabetta, Indiveri, Giacomo, and Douglas, Rodney J.
- Subjects
NEURAL circuitry, COGNITION - Abstract
An abstract of the article "Synthesizing cognition in neuromorphic electronic systems," by Emre Neftci and colleagues is presented.
- Published
- 2013
- Full Text
- View/download PDF
42. A 22-pJ/spike 73-Mspikes/s 130k-compartment neural array transceiver with conductance-based synaptic and membrane dynamics.
- Author
-
Park J, Ha S, Yu T, Neftci E, and Cauwenberghs G
- Abstract
Neuromorphic cognitive computing offers a bio-inspired means to approach the natural intelligence of biological neural systems in silicon integrated circuits. Typically, such circuits either reproduce biophysical neuronal dynamics in great detail as tools for computational neuroscience, or abstract away the biology by simplifying the functional forms of neural computation in large-scale systems for machine intelligence with high integration density and energy efficiency. Here we report a hybrid which offers biophysical realism in the emulation of multi-compartmental neuronal network dynamics at very large scale with high implementation efficiency, and yet with high flexibility in configuring the functional form and the network topology. The integrate-and-fire array transceiver (IFAT) chip emulates the continuous-time analog membrane dynamics of 65 k two-compartment neurons with conductance-based synapses. Fired action potentials are registered as address-event encoded output spikes, while the four types of synapses coupling to each neuron are activated by address-event decoded input spikes for fully reconfigurable synaptic connectivity, facilitating virtual wiring as implemented by routing address-event spikes externally through synaptic routing table. Peak conductance strength of synapse activation specified by the address-event input spans three decades of dynamic range, digitally controlled by pulse width and amplitude modulation (PWAM) of the drive voltage activating the log-domain linear synapse circuit. Two nested levels of micro-pipelining in the IFAT architecture improve both throughput and efficiency of synaptic input. This two-tier micro-pipelining results in a measured sustained peak throughput of 73 Mspikes/s and overall chip-level energy efficiency of 22 pJ/spike. Non-uniformity in digitally encoded synapse strength due to analog mismatch is mitigated through single-point digital offset calibration. Combined with the flexibly layered and recurrent synaptic connectivity provided by hierarchical address-event routing of registered spike events through external memory, the IFAT lends itself to efficient large-scale emulation of general biophysical spiking neural networks, as well as rate-based mapping of rectified linear unit (ReLU) neural activations., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2023 Park, Ha, Yu, Neftci and Cauwenberghs.)
- Published
- 2023
- Full Text
- View/download PDF
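The "virtual wiring" described in this record can be pictured as a lookup from presynaptic addresses to lists of synaptic targets held in external memory. The sketch below uses assumed data layouts (tuple addresses, a four-valued synapse-type field, a PWAM code); the chip's actual event format differs.

    # Sketch of address-event "virtual wiring": output spikes are expanded
    # into synaptic input events by an external routing table. Field layouts
    # here are illustrative assumptions, not the IFAT's actual event format.
    from collections import defaultdict

    # presynaptic address -> [(target address, synapse type 0-3, PWAM code)]
    routing_table = defaultdict(list)
    routing_table[(0, 17)] = [((1, 42), 2, 0x1f), ((1, 43), 0, 0x08)]
    routing_table[(0, 18)] = [((1, 42), 1, 0x10)]

    def route(output_events):
        """Expand chip output spikes into address-event synaptic inputs."""
        synaptic_events = []
        for pre_addr in output_events:
            synaptic_events.extend(routing_table[pre_addr])
        return synaptic_events

    print(route([(0, 17), (0, 18)]))
    # [((1, 42), 2, 31), ((1, 43), 0, 8), ((1, 42), 1, 16)]

Because connectivity lives entirely in this external table, rewiring the network is a memory update rather than a silicon change, which is what makes the connectivity fully reconfigurable.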
43. Visualizing a joint future of neuroscience and neuromorphic engineering.
- Author
-
Zenke F, Bohté SM, Clopath C, Comşa IM, Göltz J, Maass W, Masquelier T, Naud R, Neftci EO, Petrovici MA, Scherr F, and Goodman DFM
- Subjects
- Biomedical Engineering methods, Forecasting, Humans, Neurons physiology, Neurosciences methods, Biomedical Engineering trends, Models, Neurological, Neural Networks, Computer, Neurosciences trends
- Abstract
Recent research resolves the challenging problem of building biophysically plausible spiking neural models that are also capable of complex information processing. This advance creates new opportunities in neuroscience and neuromorphic engineering, which we discussed at an online focus meeting. Competing Interests: Declaration of interests R.N. has filed a provisional patent application: "Deep learning method with spiking units and spiking neural network system and neuromorphic device." I.M.C. is an employee of Google LLC. Portions of the work by Comşa et al. (2019) are covered by pending PCT Patent Application No. PCT/US2019/055848 ("Temporal Coding in Leaky Spiking Neural Networks"), filed by Google in 2019. (Copyright © 2021.)
- Published
- 2021
- Full Text
- View/download PDF
44. Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE).
- Author
-
Kaiser J, Mostafa H, and Neftci E
- Abstract
A growing body of work underlines striking similarities between biological neural networks and recurrent, binary neural networks. A relatively smaller body of work, however, addresses the similarities between learning dynamics employed in deep artificial neural networks and synaptic plasticity in spiking neural networks. The main obstacle is the discrepancy between the dynamical properties of synaptic plasticity and the requirements of gradient backpropagation. Learning algorithms that approximate gradient backpropagation using local error functions can overcome this obstacle. Here, we introduce Deep Continuous Local Learning (DECOLLE), a spiking neural network equipped with local error functions for online learning with no memory overhead for computing gradients. DECOLLE is capable of learning deep spatio-temporal representations from spikes relying solely on local information, making it compatible with neurobiology and neuromorphic hardware. Synaptic plasticity rules are derived systematically from user-defined cost functions and neural dynamics by leveraging existing autodifferentiation methods of machine learning frameworks. We benchmark our approach on the event-based neuromorphic datasets N-MNIST and DvsGesture, on which DECOLLE performs comparably to the state of the art. DECOLLE networks provide continuously learning machines that are relevant to biology and supportive of event-based, low-power computer vision architectures, matching the accuracies of conventional computers on tasks where temporal precision and speed are essential. (Copyright © 2020 Kaiser, Mostafa and Neftci.) A minimal sketch of the local-learning pattern follows this record.
- Published
- 2020
- Full Text
- View/download PDF
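The core DECOLLE pattern, per-layer random readouts trained by local losses with spikes detached between layers, can be sketched in a few lines of PyTorch. This is a simplified rendition under assumed dynamics (discrete-time traces, a fast-sigmoid surrogate gradient), not the reference implementation.

    # Simplified DECOLLE-style layer: gradients flow only through the layer's
    # own fixed random readout; spikes are detached before being passed on,
    # so learning stays local. Dynamics and constants are assumptions.
    import torch
    import torch.nn as nn

    class SurrogateSpike(torch.autograd.Function):
        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v > 0).float()

        @staticmethod
        def backward(ctx, grad_out):
            (v,) = ctx.saved_tensors
            # fast-sigmoid surrogate derivative of the step nonlinearity
            return grad_out / (10.0 * v.abs() + 1.0) ** 2

    class LocalLayer(nn.Module):
        def __init__(self, n_in, n_hid, n_out, alpha=0.9, beta=0.85):
            super().__init__()
            self.syn = nn.Linear(n_in, n_hid)
            self.readout = nn.Linear(n_hid, n_out)  # fixed random readout
            for p in self.readout.parameters():
                p.requires_grad_(False)
            self.alpha, self.beta = alpha, beta

        def forward(self, s_in, state):
            p, q = state
            q = self.beta * q + s_in              # synaptic trace
            p = self.alpha * p + self.syn(q)      # membrane trace
            s = SurrogateSpike.apply(p - 1.0)     # spike at threshold 1
            y = self.readout(s)                   # local prediction
            return s.detach(), y, (p, q)          # detach: no cross-layer grads

    layer = LocalLayer(n_in=100, n_hid=64, n_out=10)
    state = (torch.zeros(1, 64), torch.zeros(1, 100))   # (membrane, synaptic)
    s_in = (torch.rand(1, 100) < 0.1).float()           # random input spikes
    s, y, state = layer(s_in, state)

At each time step every layer's prediction y is compared with the target under its own loss; summing the per-layer losses and calling backward once then yields purely local weight updates.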
45. Editorial: Spiking Neural Network Learning, Benchmarking, Programming and Executing.
- Author
-
Li G, Deng L, Chua Y, Li P, Neftci EO, and Li H
- Published
- 2020
- Full Text
- View/download PDF
46. Contrastive Hebbian learning with random feedback weights.
- Author
-
Detorakis G, Bartley T, and Neftci E
- Subjects
- Feedback, Machine Learning, Neural Networks, Computer
- Abstract
Neural networks are commonly trained to make predictions through learning algorithms. Contrastive Hebbian learning, a powerful rule inspired by gradient backpropagation, is based on Hebb's rule and the contrastive divergence algorithm. It operates in two phases: a free phase, in which the data are fed to the network, and a clamped phase, in which the target signals are clamped to the output layer of the network and the feedback signals are transformed through the transposes of the synaptic weight matrices. This implies symmetry at the synaptic level, for which there is so far no evidence in the brain. In this work, we propose a new variant of the algorithm, called random contrastive Hebbian learning, which does not rely on any synaptic weight symmetry. Instead, it uses random matrices to transform the feedback signals during the clamped phase, and the neural dynamics are described by first-order nonlinear differential equations. The algorithm is experimentally verified by solving a Boolean logic task, classification tasks (handwritten digits and letters), and an autoencoding task. This article also shows how the parameters, especially the random matrices, affect learning. We use pseudospectra analysis to investigate further how random matrices impact the learning process. Finally, we discuss the biological plausibility of the proposed algorithm and how it can give rise to better computational models for learning. (Copyright © 2019 Elsevier Ltd. All rights reserved.) A toy two-phase sketch follows this record.
- Published
- 2019
- Full Text
- View/download PDF
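The two-phase structure is easy to see in a toy discrete-time version: relax the network freely, relax it again with the output clamped and feedback routed through a fixed random matrix, then update the weights from the difference of the two co-activation products. The sketch below replaces the paper's first-order differential equations with a fixed-point iteration, and its constants are illustrative, so convergence on a given task is not guaranteed.

    # Toy discrete-time sketch of random contrastive Hebbian learning on one
    # hidden layer. The paper uses continuous-time nonlinear dynamics; this
    # fixed-point relaxation and its constants are simplifications.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 3, 16, 1
    W1 = rng.normal(0, 0.5, (n_hid, n_in))
    W2 = rng.normal(0, 0.5, (n_out, n_hid))
    G = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback, not W2.T
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    gamma, lr = 0.2, 0.1                     # feedback gain, learning rate

    def settle(x, clamp=None, steps=30):
        h, y = np.zeros(n_hid), np.zeros(n_out)
        for _ in range(steps):               # relax toward a fixed point
            y = clamp if clamp is not None else sigmoid(W2 @ h)
            h = sigmoid(W1 @ x + gamma * (G @ y))
        return h, y

    X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)
    T = np.array([0.0, 1.0, 1.0, 0.0])       # XOR, with a constant bias input

    for _ in range(3000):
        for x, t in zip(X, T):
            h_f, y_f = settle(x)                       # free phase
            h_c, y_c = settle(x, clamp=np.array([t]))  # clamped phase
            W2 += lr * (np.outer(y_c, h_c) - np.outer(y_f, h_f))
            W1 += lr * np.outer(h_c - h_f, x)          # contrastive update

    for x, t in zip(X, T):
        print(int(t), float(settle(x)[1][0]))

The feedback matrix G stays fixed throughout training; only the forward weights learn, which is exactly what removes the weight-symmetry requirement of standard contrastive Hebbian learning.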
47. PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems.
- Author
-
Stefanini F, Neftci EO, Sheik S, and Indiveri G
- Abstract
Neuromorphic hardware offers an electronic substrate for the realization of asynchronous event-based sensory-motor systems and large-scale spiking neural network architectures. In order to characterize these systems, configure them, and carry out modeling experiments, it is often necessary to interface them to workstations. The software used for this purpose typically consists of a large monolithic block of code that is highly specific to the hardware setup used. While this approach can lead to highly integrated hardware/software systems, it hampers the development of modular and reconfigurable infrastructures, thus preventing a rapid evolution of such systems. To alleviate this problem, we propose PyNCS, an open-source front-end for the definition of neural network models that is interfaced to the hardware through a set of Python Application Programming Interfaces (APIs). The design of PyNCS promotes modularity, portability, and expandability, and separates implementation from hardware description. The high-level front-end that comes with PyNCS includes tools to define neural network models as well as to create, monitor, and analyze spiking data. Here we report the design philosophy behind the PyNCS framework and describe its implementation. We demonstrate its functionality with two representative case studies, one using an event-based neuromorphic vision sensor, and one using a set of multi-neuron devices for carrying out a cognitive decision-making task involving state-dependent computation. PyNCS, already applicable to a wide range of existing spike-based neuromorphic setups, will accelerate the development of hybrid software/hardware neuromorphic systems thanks to its flexibility. The code is open-source and available online at https://github.com/inincs/pyNCS. An illustration of the separation-of-concerns pattern follows this record.
- Published
- 2014
- Full Text
- View/download PDF
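The design principle reported here, defining models against an abstract front-end and binding them to a hardware back-end separately, is independent of PyNCS's concrete classes. The sketch below illustrates the pattern only; all class and method names are invented for this sketch, and the real API lives in the repository linked above.

    # Illustration of separating model definition from hardware description.
    # Names are hypothetical; this is not PyNCS's actual API.
    class Backend:
        """Abstract hardware interface: a simulator or a chip driver."""
        def allocate(self, n):
            raise NotImplementedError
        def connect(self, pre, post, w):
            raise NotImplementedError
        def run(self, duration):
            raise NotImplementedError

    class SimBackend(Backend):
        def __init__(self):
            self.n_neurons = 0
            self.synapses = []

        def allocate(self, n):
            ids = list(range(self.n_neurons, self.n_neurons + n))
            self.n_neurons += n
            return ids

        def connect(self, pre, post, w):
            self.synapses += [(p, q, w) for p in pre for q in post]

        def run(self, duration):
            print(f"simulating {self.n_neurons} neurons, "
                  f"{len(self.synapses)} synapses for {duration} s")

    def build_network(backend):
        """Model definition: written once, runs on any backend."""
        sensory = backend.allocate(64)
        decision = backend.allocate(32)
        backend.connect(sensory, decision, w=0.5)

    backend = SimBackend()
    build_network(backend)
    backend.run(1.0)
    # a chip driver implementing the same three calls would slot in here

The point of the pattern is that build_network never mentions the hardware; swapping the back-end object re-targets the same model description to a different substrate.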
48. Dynamic state and parameter estimation applied to neuromorphic systems.
- Author
-
Neftci EO, Toth B, Indiveri G, and Abarbanel HD
- Subjects
- Animals, Humans, Computer Simulation, Models, Neurological, Neural Networks, Computer, Neurons physiology
- Abstract
Neuroscientists often propose detailed computational models to probe the properties of the neural systems they study. With the advent of neuromorphic engineering, an increasing number of hardware electronic analogs of biological neural systems are being proposed as well. However, for both biological and hardware systems, it is often difficult to estimate the parameters of the model so that they are meaningful to the experimental system under study, especially when these models involve a large number of states and parameters that cannot be simultaneously measured. We have developed a procedure to solve this problem in the context of interacting neural populations using a recently developed dynamic state and parameter estimation (DSPE) technique. This technique uses synchronization as a tool for dynamically coupling experimentally measured data to its corresponding model in order to determine the model's parameters and internal state variables. Typically, experimental data are obtained from the biological neural system while the model is simulated in software; here we show that this technique is also efficient in validating proposed network models for neuromorphic spike-based very large-scale integration (VLSI) chips, and that it is able to systematically extract network parameters such as synaptic weights, time constants, and other variables that are not accessible by direct observation. Our results suggest that this method can become a very useful tool for model-based identification and configuration of neuromorphic multichip VLSI systems. A toy synchronization-based sketch follows this record.
- Published
- 2012
- Full Text
- View/download PDF
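The synchronization idea behind DSPE can be conveyed with a one-parameter toy: couple a model to measured data through a nudging term, and adjust the unknown parameter so that the residual absorbed by the coupling shrinks. DSPE proper solves a constrained variational problem over all states and parameters; the sketch below is only a gradient-style caricature of that idea, with made-up constants.

    # Toy nudging illustration of synchronization-based parameter estimation
    # (a caricature of DSPE, which solves a full constrained optimization).
    # Estimate the leak g of dv/dt = -g*v + I(t) from a "measured" trace.
    import numpy as np

    dt, T = 1e-3, 10.0
    t = np.arange(0.0, T, dt)
    I = 1.5 + 0.5 * np.sin(2 * np.pi * 3 * t)     # known drive
    g_true = 2.0

    v_data = np.zeros_like(t)                     # synthetic measurement
    for i in range(1, t.size):
        v_data[i] = v_data[i - 1] + dt * (-g_true * v_data[i - 1] + I[i - 1])

    g_hat, k, lr, v = 0.5, 20.0, 20.0, 0.0        # guess, coupling, rate, state
    for i in range(1, t.size):
        e = v_data[i - 1] - v                     # residual the coupling absorbs
        g_hat -= lr * dt * e * v                  # shrink the residual
        v += dt * (-g_hat * v + I[i - 1] + k * e) # model nudged toward the data
    print(f"estimated g = {g_hat:.2f} (true value {g_true})")

When the model and data synchronize with a vanishing nudging term, the parameter estimate is consistent with the data; the same principle extends to the multi-state, multi-parameter networks treated in the paper.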