
In-Datacenter Performance Analysis of a Tensor Processing Unit

Authors :
Jouppi, Norman P.
Young, Cliff
Patil, Nishant
Patterson, David
Agrawal, Gaurav
Bajwa, Raminder
Bates, Sarah
Bhatia, Suresh
Boden, Nan
Borchers, Al
Boyle, Rick
Cantin, Pierre-luc
Chao, Clifford
Clark, Chris
Coriell, Jeremy
Daley, Mike
Dau, Matt
Dean, Jeffrey
Gelb, Ben
Ghaemmaghami, Tara Vazir
Gottipati, Rajendra
Gulland, William
Hagmann, Robert
Ho, C. Richard
Hogberg, Doug
Hu, John
Hundt, Robert
Hurt, Dan
Ibarz, Julian
Jaffey, Aaron
Jaworski, Alek
Kaplan, Alexander
Khaitan, Harshit
Koch, Andy
Kumar, Naveen
Lacy, Steve
Laudon, James
Law, James
Le, Diemthu
Leary, Chris
Liu, Zhuyuan
Lucke, Kyle
Lundin, Alan
MacKean, Gordon
Maggiore, Adriana
Mahony, Maire
Miller, Kieran
Nagarajan, Rahul
Narayanaswami, Ravi
Ni, Ray
Nix, Kathy
Norrie, Thomas
Omernick, Mark
Penukonda, Narayana
Phelps, Andy
Ross, Jonathan
Ross, Matt
Salek, Amir
Samadiani, Emad
Severn, Chris
Sizikov, Gregory
Snelham, Matthew
Souter, Jed
Steinberg, Dan
Swing, Andy
Tan, Mercedes
Thorson, Gregory
Tian, Bo
Toma, Horia
Tuttle, Erick
Vasudevan, Vijay
Walter, Richard
Wang, Walter
Wilcox, Eric
Yoon, Doe Hyun
Publication Year :
2017

Abstract

Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC---called a Tensor Processing Unit (TPU)---deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X - 30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X - 80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.

Comment: 17 pages, 11 figures, 8 tables. To appear at the 44th International Symposium on Computer Architecture (ISCA), Toronto, Canada, June 24-28, 2017
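The abstract's headline 92 TOPS figure follows directly from the MAC count and the chip's clock rate. Below is a minimal back-of-the-envelope sketch in Python, assuming the 700 MHz clock rate reported in the paper body (the clock rate is not stated in this record):

    # Sanity check (not part of this record): derive the abstract's peak
    # throughput figure from the MAC count.
    MACS = 65_536        # 8-bit MACs in the 256 x 256 matrix multiply unit
    OPS_PER_MAC = 2      # one multiply plus one accumulate counts as two ops
    CLOCK_HZ = 700e6     # assumed clock rate (from the paper body, not the abstract)

    peak_tops = MACS * OPS_PER_MAC * CLOCK_HZ / 1e12
    print(f"Peak throughput: {peak_tops:.1f} TOPS")  # ~91.8, rounded to 92 TOPS in the abstract

The same style of reasoning is behind the GDDR5 claim: where a layer is bound by memory bandwidth rather than compute, achieved (as opposed to peak) TOPS scales roughly with bandwidth, which is why swapping in faster memory is projected to triple achieved TOPS.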

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1704.04760
Document Type :
Working Paper