Plug N’ PIM: An integration strategy for Processing-in-Memory accelerators

Integration (2023)

Abstract
Processing-in-Memory (PIM) devices have reemerged as a promising way to mitigate the memory wall and the cost of transferring massive amounts of data from main memory to the host processor. Novel memory technologies and the advent of 3D-stacked integration have provided means to compute data in-memory, either by exploiting inherent analog compute capabilities or by tightly coupling logic and memory. However, making effective use of a PIM device typically demands significant and costly modifications to the host processor to support instruction offloading, cache coherence, virtual memory management, and communication between different PIM instances. This paper tackles these challenges by presenting a set of solutions that couple host and PIM with no modifications on the host side. Moreover, we highlight the limitations of modern host processors that may prevent extracting the full performance of PIM devices. This work presents Plug N' PIM, a set of strategies and procedures to seamlessly couple host general-purpose processors and PIM devices. We show that our techniques allow one to exploit the benefits of a PIM device with seamless host–PIM integration, bypassing possible limitations on the host side.
Keywords
Processing-in-Memory, Code offloading, Cache coherence, System integration