A Comprehensive Memory Management Framework for CPU-FPGA Heterogeneous SoCs

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2023)

Abstract
Efficient utilization of constrained memory resources is of paramount importance when designing memory-intensive applications on CPU-FPGA heterogeneous multiprocessor system-on-chips (HMPSoCs). State-of-the-art high-level synthesis (HLS) tools rely on system programmers to manually determine data placement within the complex memory hierarchy. Different data placement policies can lead to very different system performance, and finding an optimal policy is a nontrivial problem. For instance, we show the counterintuitive result that a traditional frequency- and locality-based data placement strategy designed for CPU architectures leads to suboptimal system performance on CPU-FPGA HMPSoCs. In this work, we first propose an automatic data placement framework for field programmable gate array (FPGA) kernels that determines whether each array object should be accessed via on-chip BRAM, the shared CPU L2-cache, or DDR memory to achieve the best performance. Moreover, we find that when a CPU kernel and an FPGA kernel execute in parallel, memory contention may degrade performance, so the optimal data placement policy designed for the FPGA kernel alone does not achieve the optimal overall system performance. We therefore propose to use cache partitioning to mitigate the impact of memory contention, and we extend the FPGA-oriented framework with a cross-layer memory contention analysis that automatically generates an optimal data placement policy and cache partitioning configuration for the parallel kernels. The proposed framework integrates seamlessly with the commercial Vivado HLS flow. Experimental results on the Zedboard platform show an average 1.5x performance speedup for FPGA kernels compared with a greedy allocation strategy. When FPGA kernels and CPU kernels execute in parallel, the FPGA kernel and the CPU kernel achieve average speedups of 1.62x and 1.10x, respectively.
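To make the placement problem concrete, the sketch below enumerates, for each array object of an FPGA kernel, the three placement targets named in the abstract (on-chip BRAM, shared L2-cache, DDR) and keeps the assignment with the lowest estimated access cost that fits the BRAM budget. This is a minimal illustration, not the paper's actual framework: the per-access latencies, the brute-force search, and the `ArrayObject`/`bestPlacement` names are assumptions made for the example only; the real framework derives its decisions from cross-layer analysis integrated with Vivado HLS.

```cpp
#include <cstdint>
#include <iostream>
#include <limits>
#include <string>
#include <vector>

// Placement targets from the paper's memory hierarchy.
enum class Placement { BRAM, L2Cache, DDR };

struct ArrayObject {
    std::string name;
    std::size_t bytes;      // memory footprint of the array
    std::uint64_t accesses; // accesses issued by the FPGA kernel
};

// Illustrative per-access latencies (cycles); real values would come from
// profiling the target HMPSoC, not from these placeholder numbers.
double accessCost(Placement p) {
    switch (p) {
        case Placement::BRAM:    return 1.0;
        case Placement::L2Cache: return 8.0;
        case Placement::DDR:     return 30.0;
    }
    return 0.0;
}

// Enumerate every placement combination and keep the cheapest one that
// still fits within the on-chip BRAM capacity.
std::vector<Placement> bestPlacement(const std::vector<ArrayObject>& arrays,
                                     std::size_t bramCapacity) {
    const std::vector<Placement> options = {Placement::BRAM, Placement::L2Cache,
                                            Placement::DDR};
    std::vector<Placement> best(arrays.size(), Placement::DDR);
    double bestCost = std::numeric_limits<double>::max();

    std::size_t combos = 1;
    for (std::size_t i = 0; i < arrays.size(); ++i) combos *= options.size();

    for (std::size_t code = 0; code < combos; ++code) {
        std::vector<Placement> cand(arrays.size());
        std::size_t rem = code, bramUsed = 0;
        double cost = 0.0;
        for (std::size_t i = 0; i < arrays.size(); ++i) {
            cand[i] = options[rem % options.size()];
            rem /= options.size();
            if (cand[i] == Placement::BRAM) bramUsed += arrays[i].bytes;
            cost += arrays[i].accesses * accessCost(cand[i]);
        }
        if (bramUsed <= bramCapacity && cost < bestCost) {
            bestCost = cost;
            best = cand;
        }
    }
    return best;
}

int main() {
    // Hypothetical kernel data: a small coefficient table and a large frame buffer.
    std::vector<ArrayObject> arrays = {
        {"coeffs", 4 * 1024, 200000}, {"frame", 512 * 1024, 50000}};
    auto placement = bestPlacement(arrays, 256 * 1024);
    const char* names[] = {"BRAM", "L2-cache", "DDR"};
    for (std::size_t i = 0; i < arrays.size(); ++i)
        std::cout << arrays[i].name << " -> "
                  << names[static_cast<int>(placement[i])] << "\n";
}
```

Under this toy cost model the hot coefficient table lands in BRAM while the large frame buffer is kept off-chip, mirroring the abstract's point that placement should follow more than access frequency alone once capacity and contention are considered.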
Keywords
CPU-FPGA heterogeneous multiprocessor system-on-chip (HMPSoC), data placement, memory architecture, shared resource contention