Warped-Preexecution: A GPU Pre-Execution Approach for Improving Latency Hiding

HPCA 2016

Cited by 57 | Viewed 56
Abstract
This paper presents a pre-execution approach for improving GPU performance, called P-mode (pre-execution mode). GPUs utilize a number of concurrent threads for hiding the processing delay of operations. However, certain long-latency operations such as off-chip memory accesses often take hundreds of cycles and hence lead to stalls even in the presence of thread concurrency and fast thread switching capability. It is unclear whether adding more threads can improve latency tolerance, due to increased memory contention. Further, adding more threads increases on-chip storage demands. Instead, we propose that when a warp is stalled on a long-latency operation it enters P-mode. In P-mode, a warp continues to fetch and decode successive instructions to identify any independent instruction that is not on the long-latency dependence chain. These independent instructions are then pre-executed. To tackle write-after-write and write-after-read hazards, during P-mode output values are written to renamed physical registers. We exploit register file underutilization to re-purpose a few unused registers to store the P-mode results. When a warp is switched from P-mode back to normal execution mode, it reuses pre-executed results by reading the renamed registers. Any global load operation in P-mode is transformed into a pre-load which fetches data into the L1 cache to reduce future memory access penalties. Our evaluation results show a 23% performance improvement for memory-intensive applications, without negatively impacting other application categories.
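The core P-mode idea described above can be sketched in software: scan the instructions after a stalled long-latency load, skip anything on the load's dependence chain, and rename the destinations of the remaining independent instructions to unused physical registers to avoid WAW/WAR hazards. This is a minimal illustrative sketch, not the paper's hardware implementation; the instruction tuple format and register names are assumptions made for the example.

```python
# Hedged sketch of P-mode instruction selection (illustrative only, not the
# paper's hardware design). Instructions are (dest, [srcs]) tuples in
# program order; register names like "r1"/"p0" are assumed for the example.

def select_pmode_instructions(instructions, stalled_dest, free_regs):
    """Return (pre_executable, rename_map) for a warp entering P-mode.

    instructions : list of (dest, [srcs]) tuples after the stall point
    stalled_dest : register written by the stalled long-latency load
    free_regs    : iterator over unused physical registers for renaming
    """
    tainted = {stalled_dest}   # registers on the long-latency chain
    rename = {}                # architectural -> renamed physical register
    pre_executable = []
    for dest, srcs in instructions:
        if any(s in tainted for s in srcs):
            # Dependent on the stalled load: cannot pre-execute, and its
            # result is in turn on the dependence chain.
            tainted.add(dest)
            continue
        # Independent: pre-execute, writing to a renamed register so the
        # stalled warp's architectural state is untouched (no WAW/WAR).
        new_dest = next(free_regs)
        rename[dest] = new_dest
        renamed_srcs = [rename.get(s, s) for s in srcs]
        pre_executable.append((new_dest, renamed_srcs))
        tainted.discard(dest)  # dest now holds a fresh, independent value
    return pre_executable, rename
```

On resuming normal mode, the warp would read results through `rename`, mirroring how the paper describes reusing pre-executed values from the re-purposed registers.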
Keywords
cache storage, concurrency (computers), graphics processing units, multi-threading, GPU performance, L1 cache, P-mode output values, fast thread switching capability, long-latency operations, memory access penalties, memory contention, memory intensive applications, on-chip storage demands, physical registers, pre-execution mode, processing delay, register file underutilization, thread concurrency, write-after-read hazards, write-after-write hazards