Improving Performance of Network-on-Memory Architectures via (De-)/Compression-in-DRAM

Arghavan Mohammadhassani, Anup Das

2023 ACM/IEEE System Level Interconnect Pathfinding Workshop (SLIP 2023)

Abstract
Network-on-chips (NoCs) are envisioned to be a scalable communication substrate for Network-on-Memory (NoM) architectures. However, modern data-intensive workloads continue to overwhelm the NoC link capacity, dramatically increasing memory service latency and causing significant performance loss. We introduce DECORAM, a data (de-)/compression scheme implemented within a DRAM-based NoM architecture. DECORAM uses a lookup table (LUT) to store compressed codes of common data patterns, and exploits this LUT during LLC misses to transmit these codes via the NoC instead of the original uncompressed data. We formulate the compression and decompression mechanisms as a combination of LUT-based pattern matching and prefix concatenation, implemented using low-latency DRAM row activations that exploit the analog properties of the DRAM cell. To support DECORAM, we introduce a minimal design change: adding isolation transistors in a subarray to activate inter-subarray data movement based on the content of its row buffer. Our DECORAM controller reduces compression and decompression latency by exploiting subarray-level parallelism to compress/decompress several CPU data misses simultaneously. We evaluate DECORAM using data-intensive workloads from the SPEC, APACHE, PARSEC, and in-memory computing benchmark suites. Our results show that, compared to a baseline NoM, DECORAM significantly improves performance (by 30% on average) and reduces energy (by 32% on average). Compared to a conventional NoC compression mechanism, DECORAM reduces memory area by 27% and energy by 12%, while delivering a 7% higher performance improvement.
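The LUT-based pattern matching and prefix concatenation described above can be illustrated with a minimal software sketch. The pattern set, code widths, and 32-bit word size below are illustrative assumptions, not values from the paper; DECORAM realizes this mechanism with DRAM row activations rather than software.

```python
# Hypothetical sketch of LUT-based compression with prefix concatenation.
# A per-word prefix bit distinguishes a LUT hit (short code) from a raw word.

# LUT of assumed common 32-bit data patterns -> 2-bit codes.
LUT = {
    0x00000000: "00",  # all zeros
    0xFFFFFFFF: "11",  # all ones
    0x00000001: "01",  # small positive value
    0xDEADBEEF: "10",  # an assumed frequent pattern
}

def compress_word(word: int) -> str:
    """Return the bit string transmitted over the NoC for one 32-bit word."""
    if word in LUT:                      # pattern hit: prefix '1' + LUT code
        return "1" + LUT[word]
    return "0" + format(word, "032b")    # miss: prefix '0' + raw 32-bit word

def compress_line(words: list[int]) -> str:
    """Concatenate per-word codes for a cache line (prefix concatenation)."""
    return "".join(compress_word(w) for w in words)

def decompress_line(bits: str, nwords: int) -> list[int]:
    """Inverse: walk the bit stream, decoding each prefixed field."""
    rev = {code: pattern for pattern, code in LUT.items()}
    out, i = [], 0
    for _ in range(nwords):
        if bits[i] == "1":               # LUT hit: a 2-bit code follows
            out.append(rev[bits[i + 1:i + 3]])
            i += 3
        else:                            # miss: a raw 32-bit word follows
            out.append(int(bits[i + 1:i + 33], 2))
            i += 33
    return out

line = [0x00000000, 0xDEADBEEF, 0x12345678, 0xFFFFFFFF]
encoded = compress_line(line)
assert decompress_line(encoded, len(line)) == line
print(len(encoded))  # 3 + 3 + 33 + 3 = 42 bits, versus 128 bits uncompressed
```

With three of the four words matching LUT patterns, the line shrinks from 128 to 42 bits, which is the kind of link-traffic reduction the scheme targets; a real design would also have to manage LUT training and coherence across memory stacks.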