
MANA: Microarchitecting a Temporal Instruction Prefetcher

IEEE Transactions on Computers (2023)

Abstract
L1 instruction (L1-I) cache misses are a significant source of performance bottlenecks. While many instruction prefetchers have been proposed over the years, most of them leave considerable potential uncovered. In 2011, Proactive Instruction Fetch (PIF) showed that a hardware prefetcher could effectively eliminate all instruction-cache misses. However, its enormous storage cost makes it an impractical solution. Consequently, reducing the storage cost has been the main research focus in instruction prefetching over the past decade. Several instruction prefetchers, including RDIP and Shotgun, were proposed to offer PIF-level performance with significantly lower storage overhead. However, our findings show that there is a considerable performance gap between these proposals and PIF. While these proposals use different mechanisms for instruction prefetching, the performance gap is mainly not due to the mechanism, but to insufficient storage. In this paper, we make the case that the key to designing a powerful and cost-effective instruction prefetcher is choosing a suitable metadata record and microarchitecting the prefetcher to minimize its storage. We propose MANA, which offers PIF-level performance with a 15.7x lower storage cost. MANA outperforms RDIP and Shotgun by 12.5% and 29%, respectively. We also evaluate a version of MANA with no storage overhead and show that it offers 98% of the peak performance benefits.
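The core idea the abstract describes, recording the temporal stream of L1-I misses as metadata and replaying it when a previously seen miss recurs, can be illustrated with a simplified software model. The sketch below is a generic temporal instruction prefetcher, not MANA's actual microarchitecture: the class and member names, the table sizes, and the lookahead depth are all illustrative assumptions.

```cpp
// Minimal conceptual sketch of a temporal instruction prefetcher in the
// spirit of history-based designs such as PIF. All names and sizes here
// (TemporalPrefetcher, kLookahead, kHistorySize) are assumptions for
// illustration, not the paper's design.
#include <cstdint>
#include <unordered_map>
#include <vector>

class TemporalPrefetcher {
public:
    // Record an L1-I miss and return the block addresses to prefetch, if any.
    std::vector<uint64_t> on_miss(uint64_t block_addr) {
        std::vector<uint64_t> prefetches;

        // 1. Look up the miss in the index. If this block has started a
        //    recorded temporal stream before, replay the next few blocks.
        auto it = index_.find(block_addr);
        if (it != index_.end()) {
            size_t pos = it->second;
            for (size_t i = 1; i <= kLookahead && pos + i < history_.size(); ++i)
                prefetches.push_back(history_[pos + i]);
        }

        // 2. Bound the metadata. A real design uses small circular
        //    structures; limiting this storage is the central trade-off the
        //    paper targets. Here we simply reset when full.
        if (history_.size() >= kHistorySize) {
            history_.clear();
            index_.clear();
        }

        // 3. Append the miss to the global history and point the index
        //    entry at its latest occurrence.
        index_[block_addr] = history_.size();
        history_.push_back(block_addr);
        return prefetches;
    }

private:
    static constexpr size_t kLookahead = 4;      // blocks replayed per hit (assumed)
    static constexpr size_t kHistorySize = 4096; // history entries kept (assumed)
    std::vector<uint64_t> history_;              // temporal stream of miss addresses
    std::unordered_map<uint64_t, size_t> index_; // block address -> latest position
};
```

In this simplified model, the history buffer and index table stand in for the prefetcher's metadata; the storage-reduction question the abstract raises is essentially how small these structures can be made while still capturing the recurring miss streams.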
Keywords
Processors, frontend bottleneck, instruction prefetching, instruction cache