Multi-Bank On-Chip Memory Management Techniques for CNN Accelerators

IEEE Transactions on Computers (2022)

Abstract
Since off-chip DRAM access significantly affects both performance and power consumption, convolutional neural network (CNN) accelerators commonly aim to maximize data reuse in on-chip memory. By organizing the on-chip memory into multiple banks, off-chip DRAM access latency can be hidden by prefetching data into unused banks during computation. When and where to prefetch data and how to reuse the feature ...
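The bank-prefetching idea summarized in the abstract can be illustrated with a minimal double-buffering ("ping-pong") sketch in C. This is not the paper's implementation: the tile and bank sizes, the dram[] stand-in for off-chip memory, and the prefetch_tile/compute_on_bank helpers are all illustrative assumptions. In real hardware the prefetch would be a DMA transfer that overlaps with computation on the other bank.

```c
/* Minimal sketch (assumed, not from the paper): two on-chip banks used in
 * ping-pong fashion. While the accelerator computes on one bank, the next
 * input tile is fetched from "DRAM" into the other bank, hiding the
 * off-chip access delay behind computation. */
#include <stdio.h>
#include <string.h>

#define TILE_SIZE 256                      /* elements per on-chip bank (assumed) */
#define NUM_TILES 8                        /* number of input tiles in DRAM       */

static float dram[NUM_TILES][TILE_SIZE];   /* stand-in for off-chip DRAM          */
static float on_chip[2][TILE_SIZE];        /* two on-chip banks                   */

/* Stand-in for a DMA prefetch: copy one tile from DRAM into a bank. */
static void prefetch_tile(int tile, int bank) {
    memcpy(on_chip[bank], dram[tile], sizeof(on_chip[bank]));
}

/* Stand-in for the accelerator's computation on one resident tile. */
static float compute_on_bank(int bank) {
    float acc = 0.0f;
    for (int i = 0; i < TILE_SIZE; ++i)
        acc += on_chip[bank][i];
    return acc;
}

int main(void) {
    /* Fill the "DRAM" with some data. */
    for (int t = 0; t < NUM_TILES; ++t)
        for (int i = 0; i < TILE_SIZE; ++i)
            dram[t][i] = (float)(t + 1);

    float total = 0.0f;
    int bank = 0;
    prefetch_tile(0, bank);                /* warm-up: load the first tile        */

    for (int t = 0; t < NUM_TILES; ++t) {
        int next_bank = bank ^ 1;
        if (t + 1 < NUM_TILES)
            prefetch_tile(t + 1, next_bank);   /* would overlap with compute      */
        total += compute_on_bank(bank);        /* compute on the resident tile    */
        bank = next_bank;                      /* swap the roles of the two banks */
    }

    printf("checksum = %f\n", total);
    return 0;
}
```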
Keywords
System-on-chip, Random access memory, Convolution, Memory management, Delays, Frequency modulation, Prefetching