Scaling Performance via Self-Tuning Approximation for Graphics Engines
- Authors
- D. Anoushe Jamshidi, Janghaeng Lee, Mehrzad Samadi, Amir Hormati, and Scott Mahlke
- Subjects
- Runtime system, CUDA, Speedup, General Computer Science, Computer science, Image processing, Thread (computing), Parallel computing, Compiler, Graphics, Microarchitecture
- Abstract
Approximate computing, in which computation accuracy is traded for better performance or higher data throughput, is one solution that can help data processing keep pace with the current and growing abundance of information. In particular domains, such as multimedia and learning algorithms, approximation is commonly used today. We consider automation essential to providing transparent approximation, and we show that larger benefits can be achieved by tailoring the approximation techniques to the underlying hardware. Our target platform is the GPU because of its high performance capabilities and the difficult programming challenges that can be alleviated with proper automation. Our approach, SAGE, combines a static compiler, which automatically generates a set of CUDA kernels with varying levels of approximation, with a runtime system that iteratively selects among the available kernels to achieve speedup while adhering to a target output quality set by the user. The SAGE compiler employs three optimization techniques to generate approximate kernels that exploit the GPU microarchitecture: selective discarding of atomic operations, data packing, and thread fusion. Across a set of machine learning and image processing kernels, SAGE's approximation yields an average speedup of 2.5× with less than 10% quality loss compared to accurate execution on an NVIDIA GTX 560 GPU.
- Published
- 2014
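
The abstract names three compiler techniques without showing code, so the following is a minimal, hedged CUDA sketch of the first one, selective discarding of atomic operations: an exact histogram kernel alongside an approximate variant that performs only one in every `discardRate` atomic updates and rescales the counts on the host. The kernel names, the `discardRate` parameter, and the stride-based sampling scheme are illustrative assumptions, not SAGE's actual transformation.

```cuda
// Sketch of "selective discarding of atomic operations" (not SAGE's code).
// An exact histogram serializes colliding threads on atomicAdd; the
// approximate variant skips most atomics and rescales the result.
#include <cstdio>
#include <cuda_runtime.h>

#define NUM_BINS 256

// Exact version: every thread contributes one atomic increment.
__global__ void histogramExact(const unsigned char *data, int n,
                               unsigned int *bins) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(&bins[data[i]], 1u);
}

// Approximate version: only 1 of every discardRate threads performs the
// atomic, reducing contention; the host scales counts back up afterward.
__global__ void histogramApprox(const unsigned char *data, int n,
                                unsigned int *bins, int discardRate) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && (i % discardRate) == 0)
        atomicAdd(&bins[data[i]], 1u);
}

int main() {
    const int n = 1 << 20, discardRate = 4;
    unsigned char *dData;
    unsigned int *dBins;
    cudaMalloc(&dData, n);
    cudaMalloc(&dBins, NUM_BINS * sizeof(unsigned int));
    cudaMemset(dData, 7, n);  // dummy input: every byte is value 7
    cudaMemset(dBins, 0, NUM_BINS * sizeof(unsigned int));

    histogramApprox<<<(n + 255) / 256, 256>>>(dData, n, dBins, discardRate);

    unsigned int bins[NUM_BINS];
    cudaMemcpy(bins, dBins, sizeof(bins), cudaMemcpyDeviceToHost);
    // Rescale the sampled counts to estimate the exact histogram.
    printf("bin 7 ~= %u (exact would be %d)\n", bins[7] * discardRate, n);

    cudaFree(dData);
    cudaFree(dBins);
    return 0;
}
```

In SAGE itself, per the abstract, a runtime system would iterate over several such generated variants (here, different discard rates) and keep the fastest one whose measured output quality stays within the user's target.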