Divergence Analysis with Affine Constraints.
- Source :
- 2012 IEEE 24th International Symposium on Computer Architecture & High Performance Computing; 1/1/2012, p67-74, 8p
- Publication Year :
- 2012
Abstract
- The rising popularity of graphics processing units is bringing renewed interest in code optimization techniques for SIMD processors. Many of these optimizations rely on divergence analyses, which classify variables as uniform, if they have the same value on every thread, or divergent, if they might not. This paper introduces a new kind of divergence analysis that is able to represent variables as affine functions of thread identifiers. We have implemented this analysis in Ocelot, an open source compiler, and use it to analyze a suite of 177 CUDA kernels from well-known benchmarks. We can mark about one fourth of all program variables as affine functions of thread identifiers. In addition to the novel divergence analysis, we also introduce the notion of a divergence-aware register allocator. This allocator uses information from our analysis to either rematerialize affine variables or to move uniform variables to shared memory. As a testament to its effectiveness, our divergence-aware allocator produces GPU code that is 29.70% faster than the code produced by Ocelot's register allocator. Divergence analysis with affine constraints has been publicly available in the Ocelot compiler since June 2012. [ABSTRACT FROM PUBLISHER]
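- The following is an illustrative sketch, not taken from the paper, of how variables in a CUDA kernel fall into the classes the abstract describes (uniform, affine in the thread identifier, and divergent). The kernel and variable names are hypothetical.

```cuda
// Hypothetical kernel illustrating the three variable classes the
// divergence analysis distinguishes.
__global__ void scale_kernel(const float *in, float *out, float factor, int n)
{
    // 'factor' and 'n' are uniform: every thread sees the same value,
    // so a divergence-aware allocator could keep them in shared memory
    // instead of spilling them to per-thread local memory.

    // 'i' is an affine function of the thread identifier
    // (i = blockDim.x * blockIdx.x + 1 * threadIdx.x), so it could be
    // rematerialized from blockIdx.x/threadIdx.x rather than spilled.
    int i = blockDim.x * blockIdx.x + threadIdx.x;

    if (i < n) {
        // 'v' is divergent: its value depends on per-thread data and
        // cannot be expressed as an affine function of thread identifiers.
        float v = in[i];
        out[i] = v * factor;
    }
}
```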
Details
- Language :
- English
- ISBNs :
- 9781467347907
- Database :
- Complementary Index
- Journal :
- 2012 IEEE 24th International Symposium on Computer Architecture & High Performance Computing
- Publication Type :
- Conference
- Accession number :
- 86539132
- Full Text :
- https://doi.org/10.1109/SBAC-PAD.2012.22