Improving Utilization of Dataflow Unit for Multi-Batch Processing

ACM Transactions on Architecture and Code Optimization (2024)

Abstract
Dataflow architectures can achieve much better performance and higher efficiency than general-purpose cores, approaching the performance of a specialized design while retaining programmability. However, advanced application scenarios place higher demands on the hardware in terms of cross-domain and multi-batch processing. In this article, we propose a unified scale-vector architecture that can work in multiple modes and adapt to diverse algorithms and requirements efficiently. First, a novel reconfigurable interconnection structure is proposed, which can organize execution units into different cluster topologies to accommodate different degrees of data-level parallelism. Second, we decouple threads within each dataflow graph (DFG) node into consecutive pipeline stages and provide architectural support. By time-multiplexing these stages, dataflow hardware can achieve much higher utilization and performance. In addition, the task-based programming model can also exploit multi-level parallelism and deploy applications efficiently. Evaluated on a wide range of benchmarks, including digital signal processing algorithms, CNNs, and scientific computing algorithms, our design attains up to 11.95x energy efficiency (performance-per-watt) improvement over a GPU (V100), and 2.01x energy efficiency improvement over state-of-the-art dataflow architectures.
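The core idea of time-multiplexing decoupled pipeline stages across batches can be illustrated with a minimal sketch. The Python model below is not the paper's implementation; the three-stage split ("load", "compute", "store") and the cycle counts are assumptions made purely for illustration. It only contrasts serial batch execution of one DFG node with stage-level overlap to show why utilization rises with multi-batch processing.

```python
# Hypothetical sketch (not the paper's design): models how decoupling a DFG
# node's work into pipeline stages and time-multiplexing batches across those
# stages can raise execution-unit utilization for multi-batch workloads.

STAGES = ["load", "compute", "store"]   # assumed stage split for one DFG node
NUM_BATCHES = 4                          # batches sharing the same DFG node

def serial_cycles(num_batches: int, num_stages: int) -> int:
    """Each batch occupies the node for all stages before the next batch starts."""
    return num_batches * num_stages

def pipelined_cycles(num_batches: int, num_stages: int) -> int:
    """Stages are time-multiplexed: once the pipeline fills, a new batch finishes every cycle."""
    return num_stages + (num_batches - 1)

if __name__ == "__main__":
    s = serial_cycles(NUM_BATCHES, len(STAGES))
    p = pipelined_cycles(NUM_BATCHES, len(STAGES))
    # Utilization of the compute stage: one useful compute cycle per batch.
    print(f"serial:    {s} cycles, compute-stage utilization {NUM_BATCHES / s:.2f}")
    print(f"pipelined: {p} cycles, compute-stage utilization {NUM_BATCHES / p:.2f}")
```

With four batches and three stages, the serial model needs 12 cycles (compute-stage utilization 0.33) while the time-multiplexed model needs 6 (utilization 0.67), which is the qualitative effect the abstract attributes to stage decoupling.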
Keywords
Utilization, network-on-chip, decoupled architecture, batch processing