iConn: A Communication Infrastructure for Heterogeneous Computing Architectures

JETC (2015)

Abstract
Recently, the graphics processing unit (GPU) has made significant progress as a general-purpose parallel processor. The CPU and GPU cooperate to solve data-parallel and control-intensive real-world applications in an optimized fashion. For example, emerging heterogeneous computing architectures such as Intel Sandy Bridge and AMD Fusion integrate the functionality of the CPU and GPU on a single die. However, a single-die CPU-GPU heterogeneous computing architecture faces a tight die-area budget, and a conventional homogeneous interconnect cannot fully exploit that budget to deliver satisfactory performance in the heterogeneous processing era. In this article, we aim to implement an interconnect network within an area budget for a CPU-GPU heterogeneous computing architecture. We propose iConn, a 2D mesh-style on-chip heterogeneous communication infrastructure. In iConn, a set of GPU logical units such as the stream processors, the texture units, and the rendering output units forms a computing unit (CU). Unlike conventional homogeneous router designs, iConn adopts nonuniform on-chip routers to meet the distinct communication demands of each individual CPU and CU. The routers can also dynamically allocate their buffers across all virtual channels (VCs) to meet the latency requirements of CPUs and CUs. Moreover, the memory controller scheduling algorithm is modified from traditional load-over-store scheduling to prioritize traffic. Our simulation results show that iConn improves the performance of CPUs by 23.0% and of CUs by 9.4%.
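The abstract states that iConn's memory controller departs from plain load-over-store scheduling in order to prioritize traffic, but it does not give the algorithm here. The following C++ sketch is not the paper's scheduler; it is a minimal illustration, assuming hypothetical MemRequest fields and a simple CPU-over-CU, load-over-store, oldest-first ordering, of how such a class-aware priority policy could be expressed.

// Minimal sketch of a class-aware memory-request scheduler.
// Assumption: request fields and the exact priority order are illustrative,
// not taken from the iConn paper.
#include <algorithm>
#include <vector>

enum class Source { CPU, CU };      // which client issued the request
enum class Kind   { Load, Store };  // memory operation type

struct MemRequest {
    Source source;          // CPU requests are treated as latency-sensitive
    Kind   kind;            // loads are preferred over stores
    unsigned long arrival;  // arrival time, used as a tie-breaker
};

// Returns true if request a should be serviced before request b.
// Ordering: (1) CPU traffic over CU traffic, (2) loads over stores,
// (3) older requests first (first-come, first-served tie-break).
bool scheduleBefore(const MemRequest& a, const MemRequest& b) {
    if (a.source != b.source) return a.source == Source::CPU;
    if (a.kind   != b.kind)   return a.kind   == Kind::Load;
    return a.arrival < b.arrival;
}

// Pick the next request to issue from the pending queue, or nullptr if empty.
const MemRequest* pickNext(const std::vector<MemRequest>& pending) {
    if (pending.empty()) return nullptr;
    return &*std::min_element(pending.begin(), pending.end(), scheduleBefore);
}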
Keywords
microcomputers, algorithms, design, GPU, interconnections, CPU, heterogeneous computing, network-on-chip, performance, other architecture styles, network on chip