DNNMapper: An Elastic Framework for Mapping DNNs to Multi-die FPGAs

Shuyang Li, Xilang Zhou, Haodong Lu, Kun Wang

IEEE International Symposium on Circuits and Systems (2024)

Abstract
Deep Neural Networks (DNNs) have stimulated intensive FPGA-based acceleration solutions, and multi-die FPGAs offer abundant resources for implementing large-scale DNN workloads. However, current FPGA frameworks overlook the optimization opportunities offered by multi-die FPGAs. In this paper, we propose DNNMapper, an automated framework for mapping DNNs to multi-die FPGAs. With careful consideration of the unique architectural characteristics and resource constraints of multi-die FPGAs, DNNMapper treats model partitioning and resource allocation as two critical processes that map DNN layers onto the respective FPGA dies and allocate hardware resources efficiently. DNNMapper employs a co-design engine based on a genetic algorithm, which co-optimizes model partitioning and resource allocation. Experimental results demonstrate that accelerators generated by DNNMapper offer superior performance and scalability, achieving up to 2× higher throughput and 1.3× to 1.9× higher DSP density. Moreover, our accelerators achieve a frequency improvement of 1.28× to 1.69×.
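The abstract's co-design idea can be illustrated with a minimal genetic-algorithm sketch. This is not the paper's actual engine: the layer costs, DSP budgets, fitness function (pipeline throughput bounded by the slowest die), and all operators below are hypothetical assumptions made for illustration. A chromosome encodes a model partition (cut points splitting layers into contiguous per-die segments) together with a per-die DSP allocation, and both genes evolve jointly:

```python
import random

LAYER_COST = [8, 4, 6, 10, 3, 5, 7, 9]  # hypothetical per-layer workload
NUM_DIES = 3
DSP_BUDGET = [100, 120, 80]             # hypothetical DSPs available per die

def decode(chrom):
    # chrom = (cuts, alloc): cuts are sorted layer indices splitting the
    # model into NUM_DIES contiguous segments; alloc is DSPs used per die.
    cuts, alloc = chrom
    bounds = [0] + list(cuts) + [len(LAYER_COST)]
    segments = [LAYER_COST[bounds[i]:bounds[i + 1]] for i in range(NUM_DIES)]
    return segments, alloc

def fitness(chrom):
    # Pipeline initiation interval is set by the slowest die: cost / DSPs.
    segments, alloc = decode(chrom)
    return max(sum(seg) / a for seg, a in zip(segments, alloc))

def random_chrom():
    cuts = tuple(sorted(random.sample(range(1, len(LAYER_COST)), NUM_DIES - 1)))
    alloc = tuple(random.randint(b // 2, b) for b in DSP_BUDGET)
    return (cuts, alloc)

def mutate(chrom):
    cuts, alloc = chrom
    if random.random() < 0.5:  # perturb the partition gene
        cuts = tuple(sorted(random.sample(range(1, len(LAYER_COST)), NUM_DIES - 1)))
    else:                      # perturb one die's allocation gene
        i = random.randrange(NUM_DIES)
        alloc = list(alloc)
        alloc[i] = random.randint(DSP_BUDGET[i] // 2, DSP_BUDGET[i])
        alloc = tuple(alloc)
    return (cuts, alloc)

def crossover(a, b):
    # Exchange genes across the two sub-problems: a's partition, b's allocation.
    return (a[0], b[1])

def co_design(generations=200, pop_size=30, seed=0):
    random.seed(seed)
    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]  # elitist selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)

best = co_design()
```

Because partitioning and allocation share one fitness value, a cut that overloads a die can be repaired either by moving the cut or by granting that die more DSPs, which is the essence of co-optimizing the two processes rather than solving them in sequence.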
Key words
Multi-die FPGAs, DNN Accelerator, High-Level Synthesis