Bandwidth-Effective DRAM Cache for GPUs with Storage-Class Memory
arXiv (2024)
Abstract
We propose overcoming the memory capacity limitation of GPUs with
high-capacity Storage-Class Memory (SCM) and DRAM cache. By significantly
increasing the memory capacity with SCM, the GPU can capture a larger fraction
of the memory footprint than HBM for workloads that oversubscribe memory,
achieving high speedups. However, the DRAM cache needs to be carefully designed
to address the latency and BW limitations of the SCM while minimizing cost
overhead and considering the GPU's characteristics. Because the massive number of
GPU threads can thrash the DRAM cache, we first propose an SCM-aware DRAM cache
bypass policy for GPUs that considers the multi-dimensional characteristics of
memory accesses by GPUs with SCM to bypass DRAM for data with low performance
utility. In addition, to reduce DRAM cache probes and increase effective DRAM
BW with minimal cost, we propose a Configurable Tag Cache (CTC) that repurposes
part of the L2 cache to cache DRAM cacheline tags. The L2 capacity used for the
CTC can be adjusted by users for adaptability. Furthermore, to minimize DRAM
cache probe traffic from CTC misses, our Aggregated Metadata-In-Last-column
(AMIL) DRAM cache organization co-locates all DRAM cacheline tags in a single
column within a row. The AMIL also retains full ECC protection, unlike prior
DRAM caches' Tag-And-Data (TAD) organization. Additionally, we propose SCM
throttling to curtail power and exploit SCM's SLC/MLC modes to adapt to the
workload's memory footprint. While our techniques can be used with different
DRAM and SCM devices, we focus on a Heterogeneous Memory Stack (HMS)
organization that stacks SCM dies on top of DRAM dies for high performance.
Compared to HBM, HMS improves performance by up to 12.5x (2.9x overall) and
reduces energy by up to 89.3%. Our CTC and AMIL reduce DRAM cache probe and SCM
write traffic by 91-93%, respectively.
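To make the CTC/AMIL interaction concrete, below is a minimal Python sketch (not the authors' implementation) of the lookup flow the abstract describes: tags for all cachelines in a DRAM row sit aggregated in the row's last column, and a small tag cache standing in for the repurposed L2 capacity filters DRAM tag probes. The row geometry (`LINES_PER_ROW`), the dictionary-based CTC, and the addressing are hypothetical placeholders chosen only for illustration.

```python
# Sketch of a CTC + AMIL lookup flow. All parameters are assumptions,
# not values from the paper.

LINES_PER_ROW = 31  # data cachelines per DRAM row (assumed); the last
                    # column of the row holds all of the row's tags.

class DramRow:
    def __init__(self):
        self.data = [None] * LINES_PER_ROW  # cached lines
        self.tags = [None] * LINES_PER_ROW  # aggregated in one column

class AmilCache:
    def __init__(self, num_rows):
        self.rows = [DramRow() for _ in range(num_rows)]
        self.ctc = {}  # (row, slot) -> tag; stands in for repurposed L2

    def _locate(self, addr):
        line = addr % LINES_PER_ROW
        row = (addr // LINES_PER_ROW) % len(self.rows)
        return row, line

    def lookup(self, addr):
        """Return (hit, probes); probes counts DRAM column accesses."""
        row, line = self._locate(addr)
        if (row, line) in self.ctc:  # CTC hit: no DRAM tag probe needed
            hit = self.ctc[(row, line)] == addr
            return hit, (1 if hit else 0)  # data read only on a hit
        # CTC miss: one read of the aggregated tag column fetches every
        # tag in the row at once (instead of per-line TAD probes), and
        # the tags are installed into the CTC for later accesses.
        tags = self.rows[row].tags
        for slot, tag in enumerate(tags):
            if tag is not None:
                self.ctc[(row, slot)] = tag
        hit = tags[line] == addr
        return hit, 1 + (1 if hit else 0)

    def fill(self, addr):
        row, line = self._locate(addr)
        self.rows[row].tags[line] = addr
        self.rows[row].data[line] = addr  # stand-in payload
        self.ctc[(row, line)] = addr
```

In this toy model, a CTC miss costs exactly one extra column access (the tag column) regardless of how many lines the row holds, which is the bandwidth argument for co-locating tags rather than interleaving them with data as in a TAD layout.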