
Transactional prefetching

Authors :
Osman Unsal
Adria Armejach
Anurag Negi
Adrian Cristal
Per Stenström
Source :
PACT
Publication Year :
2012
Publisher :
ACM, 2012.

Abstract

Memory access latency is the primary performance bottleneck in modern computer systems. Prefetching data before it is needed by a processing core allows substantial performance gains by overlapping significant portions of memory latency with useful work. Prior work has investigated this technique and measured potential benefits in a variety of scenarios. However, its use in speeding up Hardware Transactional Memory (HTM) has remained hitherto unexplored. In several HTM designs, transactions invalidate speculatively updated cache lines when they abort. Such cache lines tend to have high locality and are likely to be accessed again when the transaction re-executes. Coarse-grained transactions that update several cache lines are particularly susceptible to performance degradation even under moderate contention. However, such transactions show strong locality of reference, especially when contention is high. Prefetching cache lines with high locality can, therefore, improve overall concurrency by speeding up transactions and, thereby, narrowing the window of time in which such transactions persist and can cause contention. Such transactions are important since they are likely to form a common TM use-case. We note that traditional prefetch techniques may not be able to track such lines adequately or issue prefetches quickly enough. This paper investigates the use of prefetching in HTMs, proposes a simple design to identify and request prefetch candidates, and measures the performance gains achievable for several representative TM workloads.
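The core idea in the abstract can be sketched in software. This is a minimal illustrative model, not the paper's actual hardware design: all names and structures here are assumptions. It records the cache lines a transaction speculatively updates, and on abort retains them as prefetch candidates so the retry can warm those lines back up before re-executing.

```python
# Illustrative sketch only (hypothetical names; the paper describes a
# hardware mechanism, not this software model).

CACHE_LINE = 64  # bytes; a typical cache-line size (assumption)

def line_addr(addr):
    """Align a byte address to its cache-line boundary."""
    return addr & ~(CACHE_LINE - 1)

class TxPrefetcher:
    def __init__(self):
        self.speculative_lines = set()  # lines updated by the running transaction
        self.candidates = set()         # lines to prefetch before the retry

    def on_tx_write(self, addr):
        # Track every line the transaction speculatively updates.
        self.speculative_lines.add(line_addr(addr))

    def on_abort(self):
        # The abort invalidates the speculative lines; since they have high
        # locality, remember them as prefetch candidates for re-execution.
        self.candidates = set(self.speculative_lines)
        self.speculative_lines.clear()

    def on_commit(self):
        # A successful commit needs no retry, so nothing to prefetch.
        self.speculative_lines.clear()
        self.candidates.clear()

    def prefetch_for_retry(self):
        # In hardware these would be issued as prefetch requests; here we
        # just return the candidate line addresses.
        return sorted(self.candidates)

pf = TxPrefetcher()
for a in (0x1000, 0x1008, 0x2040):   # the first two writes share one line
    pf.on_tx_write(a)
pf.on_abort()
print(pf.prefetch_for_retry())       # → [4096, 8256]
```

Issuing these prefetches at abort time, rather than waiting for a conventional prefetcher to re-learn the access pattern, is what shortens the retried transaction and hence the window in which it can conflict with others.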

Details

Database :
OpenAIRE
Journal :
Proceedings of the 21st international conference on Parallel architectures and compilation techniques
Accession number :
edsair.doi...........3d0215b054d76ea74a4713c08927512b
Full Text :
https://doi.org/10.1145/2370816.2370844