For more than 50 years, the capabilities of von Neumann-style information processing systems, in which a "memory" delivers first operations and then operands to a dedicated "central processing unit," have improved dramatically. While it may seem that this remarkable history was driven by ever-increasing density (Moore's Law), the actual driver was Dennard's Law: the remarkable realization that each generation of scaled-down transistors could actually perform better, in every way, than the previous generation. Unfortunately, Dennard's Law ended some years ago, and as a result Moore's Law is now slowing considerably. In the search for ways to continue improving computing systems, the attention of the IT industry has turned to non-von Neumann algorithms, and in particular to computing architectures motivated by the human brain.

At the same time, memory technology has been going through a period of rapid change, as new nonvolatile memories (NVM), such as Phase-Change Memory (PCM), Resistive RAM (RRAM), and Spin-Transfer-Torque Magnetic RAM (STT-MRAM), emerge to complement and augment the traditional triad of SRAM, DRAM, and Flash. Such memories could enable Storage-Class Memory (SCM), an emerging memory category that seeks to combine the high performance and robustness of solid-state memory with the long-term retention and low cost of conventional hard-disk magnetic storage.

Such large arrays of NVM can also be used in non-von Neumann neuromorphic computational schemes, with device conductance serving as the plastic (modifiable) "weight" of each "native" synaptic device. This is an attractive application for these devices because, although many synaptic weights are required, the requirements on yield and variability can be relaxed. However, work in this field has remained largely qualitative and slow to scale in size. In this talk, we will discuss our recent work on scaling NVM-based neural networks in size while quantitatively assessing engineering tradeoffs [1]. We demonstrate a 3-layer neural network of 164,885 synapses, each implemented with two PCM devices, trained on a subset (5,000 examples) of the MNIST database of handwritten digits. We present a weight-update rule compatible with NVM+selector crossbar arrays, as well as a "G-diamond" concept that illustrates the problems created by nonlinearity and asymmetry in the NVM conductance response. A neural network (NN) simulator matched to the experimental demonstrator allows extensive tolerancing. NVM-based neural networks are found to be highly resilient to random effects (NVM variability, yield, and stochasticity), but highly sensitive to "gradient" effects that act to steer all synaptic weights. A low "learning rate" is shown to be advantageous for both high accuracy and low training energy.

Both the SCM and the neuromorphic applications become more attractive as the NVM arrays become large. However, in order to enable large crossbar arrays, a highly nonlinear access device (AD) is also required, in addition to the NVM devices themselves. We will also review our past work on high-performance ADs based on Cu-containing Mixed-Ionic-Electronic-Conduction (MIEC) materials [2]. These devices require only the low processing temperatures of the Back-End-Of-the-Line (BEOL), making them highly suitable for implementing multi-layer crossbar arrays. MIEC-based ADs offer large ON/OFF ratios (>10^7) and a significant voltage margin Vm (the voltage range over which device current remains very low).
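To make the two-PCM-per-synapse scheme concrete, the sketch below shows a weight encoded as the difference of two conductances, w = G+ - G-, with the crossbar performing the forward pass as an analog multiply-accumulate (Ohm's law for the products, current summation along columns). This is a minimal illustration, not the authors' simulator; the conductance window, the tanh squashing function, and the layer sizes are assumptions (528 x 250 is one first-layer decomposition consistent with the quoted 164,885-synapse count, since 528*250 + 250*125 + 125*10 + 385 bias weights = 164,885).

```python
# Minimal sketch (assumed constants; not the authors' simulator) of the
# differential-pair synapse: each weight is w = G_plus - G_minus, and a
# crossbar evaluates the forward pass as an analog multiply-accumulate.
import numpy as np

G_MIN, G_MAX = 0.1, 10.0           # assumed PCM conductance window (arb. units)
rng = np.random.default_rng(0)

n_in, n_hidden = 528, 250          # hypothetical first-layer dimensions

g_plus = rng.uniform(G_MIN, G_MAX, size=(n_in, n_hidden))
g_minus = rng.uniform(G_MIN, G_MAX, size=(n_in, n_hidden))

def forward(x):
    """Column currents = sum over rows of (read voltage * conductance),
    sensed differentially between the '+' and '-' columns of each synapse."""
    i_plus = x @ g_plus
    i_minus = x @ g_minus
    return np.tanh(i_plus - i_minus)   # illustrative neuron squashing function

x = rng.uniform(0.0, 1.0, size=n_in)   # pixel intensities applied as read voltages
h = forward(x)
```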
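The specific weight-update rule from [1] is not reproduced here, but the generic idea behind crossbar-compatible updates can be sketched: upstream neurons fire programming pulses derived from their activations, downstream neurons from their back-propagated errors, and a device is programmed only where row and column pulses coincide. This approximates the outer-product update x_i * delta_j fully in parallel, with no per-device read-before-write. All thresholds and pulse counts below are hypothetical; the effective learning rate is set by the pulse granularity, which connects to the low-learning-rate observation above.

```python
# Generic sketch of a coincidence-based crossbar update (hypothetical
# quantization; the specific rule presented in [1] is only summarized here).
import numpy as np

def crossbar_update(x, delta, max_pulses=4):
    """Quantize activations and errors (assumed scaled to [-1, 1]) into pulse
    counts; only coincident row/column pulses program a cell. Returns the
    signed number of pulses applied per cell."""
    row_pulses = np.floor(np.abs(x) * max_pulses)       # fired by upstream neurons
    col_pulses = np.floor(np.abs(delta) * max_pulses)   # fired by downstream neurons
    overlap = np.minimum.outer(row_pulses, col_pulses)  # coincident pulse count
    sign = np.sign(np.outer(x, -delta))                 # direction of each update
    return sign * overlap

x = np.array([0.9, 0.2, 0.6])       # example upstream activations
delta = np.array([0.5, -0.8])       # example downstream errors
print(crossbar_update(x, delta))
```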
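The sensitivity to "gradient" effects noted above stems from the shape of the NVM conductance response: each partial-SET pulse adds a shrinking conductance increment as the device saturates, while RESET is abrupt. The toy model below (illustrative constants, not measured PCM data) shows how this nonlinearity and asymmetry make equal numbers of up- and down-requests fail to cancel, producing exactly the kind of systematic drift of all weights that the "G-diamond" plot makes visible.

```python
# Toy model (illustrative constants, not measured PCM data) of nonlinear,
# asymmetric conductance response in a differential synapse pair.
G_MIN, G_MAX = 0.1, 10.0
ALPHA = 0.1                          # assumed headroom fraction gained per SET pulse

def set_pulse(g):
    return g + ALPHA * (G_MAX - g)   # nonlinear: smaller step as g -> G_MAX

def reset_pulse(g):
    return G_MIN                     # asymmetric: abrupt, used only for occasional
                                     # "refresh" once both devices saturate

# A weight far from zero decays under balanced training traffic, because the
# device nearer saturation gains less conductance per pulse than its partner.
g_plus, g_minus = 8.0, 1.0
w_start = g_plus - g_minus
for _ in range(20):
    g_plus = set_pulse(g_plus)       # "increase weight" request
    g_minus = set_pulse(g_minus)     # "decrease weight" request
print(w_start, g_plus - g_minus)     # |w| shrinks: a systematic, gradient-like drift
```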
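The need for a highly nonlinear AD can be made concrete with a sneak-path estimate. In an n x n crossbar read at voltage V, roughly 2n - 2 half-selected cells on the selected row and column sit near V/2, and the selector's nonlinearity determines whether their summed leakage swamps the selected cell's signal. The numbers below use a toy exponential selector with assumed parameters, not the measured MIEC characteristic; a steeper selector (smaller V0 here) recovers margin at the same array size, which is why the large ON/OFF ratio and voltage margin of the MIEC devices matter.

```python
# Toy estimate (assumed parameters; not the measured MIEC I-V) of why large
# crossbar arrays need a highly nonlinear access device.
import numpy as np

V_READ = 1.0            # assumed read voltage (V)
I0, V0 = 1e-9, 0.06     # assumed selector parameters (A, V)

def ad_current(v):
    """Exponential-style selector I-V: current rises steeply with voltage."""
    return I0 * np.sinh(v / V0)

i_cell = ad_current(V_READ)                          # selected cell signal
for n in (128, 1024, 4096):                          # array edge length
    i_sneak = (2 * n - 2) * ad_current(V_READ / 2)   # half-selected neighbors
    print(n, i_cell / i_sneak)                       # signal-to-sneak ratio falls with n
```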
[1] G. W. Burr, R. Shelby, C. di Nolfo, J. Jang, R. Shenoy, P. Narayanan, K. Virwani, E. Giacometti, B. Kurdi, and H. Hwang, "Experimental demonstration and tolerancing of a large-scale neural network (165,000 synapses), using phase-change memory as the synaptic weight element," IEDM Technical Digest, paper 29.5 (2014).
[2] R. S. Shenoy, G. W. Burr, K. Virwani, B. Jackson, A. Padilla, P. Narayanan, C. Rettner, R. M. Shelby, D. S. Bethune, K. Raman, M. BrightSky, E. Joseph, P. M. Rice, T. Topuria, A. J. Kellock, B. Kurdi, and K. Gopalakrishnan, "MIEC (Mixed-Ionic-Electronic-Conduction)-based access devices for non-volatile crossbar memory arrays," Semiconductor Science and Technology, 29(10), 104005 (2014).