A Configurable and Efficient Memory Hierarchy for Neural Network Hardware Accelerator
arXiv (2024)
Abstract
As machine learning applications continue to evolve, the demand for efficient
hardware accelerators, specifically tailored for deep neural networks (DNNs),
becomes increasingly vital. In this paper, we propose a configurable memory
hierarchy framework designed for the per-layer adaptive memory access patterns
of DNNs. The hierarchy requests data on-demand from off-chip memory and
provides it to the accelerator's compute units. The objective is to strike an optimal
balance between minimizing the required memory capacity and maintaining high
accelerator performance. The framework is characterized by its configurability,
allowing the creation of a tailored memory hierarchy with up to five levels.
Furthermore, the framework incorporates an optional shift register as the final
level to increase the flexibility of the memory management process. A
comprehensive loop-nest analysis of DNN layers shows that the framework can
efficiently execute the access patterns of most loop unrolls. Synthesis results
and a case study of the DNN accelerator UltraTrail indicate a possible
reduction in chip area of up to 62.2%. At the same time, the performance loss
can be minimized to 2.4%.
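The abstract describes an on-demand hierarchy of up to five buffer levels between off-chip memory and the compute units, optionally terminated by a shift register. As a rough illustration only (the paper's actual RTL design, level organization, and replacement policy are not given here), the following Python sketch models such a chain of small buffers: a read is served by the innermost level that holds the address, missed levels are filled on the way back, and a sliding shift register feeds the compute side. All names (`Level`, `Hierarchy`, the LRU policy) are assumptions made for this sketch, not the authors' implementation.

```python
from collections import OrderedDict, deque

class Level:
    """One on-chip buffer level, modelled as a small LRU store (an assumption;
    the paper's replacement scheme may differ)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # addr -> data

    def lookup(self, addr):
        if addr in self.store:
            self.store.move_to_end(addr)  # mark as most recently used
            return self.store[addr]
        return None

    def fill(self, addr, data):
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        self.store[addr] = data

class Hierarchy:
    """On-demand hierarchy: up to five on-chip levels in front of off-chip
    memory, with an optional shift register feeding the compute units."""
    def __init__(self, level_capacities, off_chip, shift_len=0):
        assert 1 <= len(level_capacities) <= 5
        self.levels = [Level(c) for c in level_capacities]
        self.off_chip = off_chip  # dict: addr -> data (stand-in for DRAM)
        self.shift = deque(maxlen=shift_len) if shift_len else None
        self.off_chip_accesses = 0

    def read(self, addr):
        # Search from the level closest to the compute units outward.
        for i, lvl in enumerate(self.levels):
            data = lvl.lookup(addr)
            if data is not None:
                break
        else:
            i = len(self.levels)
            data = self.off_chip[addr]
            self.off_chip_accesses += 1
        # Fill every level the request missed in on the way back.
        for lvl in self.levels[:i]:
            lvl.fill(addr, data)
        if self.shift is not None:
            self.shift.append(data)  # sliding window, e.g. for convolutions
        return data
```

Replaying a sliding-window access pattern (as a 1D convolution would produce) against this model shows the intended effect: overlapping windows are served from the inner levels, so only the new element of each window reaches off-chip memory.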