Implementing and Optimizing the Scaled Dot-Product Attention on Streaming Dataflow
CoRR (2024)
Abstract
Transformer models serve as the backbone of many state-of-the-art language
models, and most use the scaled dot-product attention (SDPA) mechanism to
capture relationships between tokens. However, the straightforward
implementation of SDPA has quadratic compute and memory complexity with respect
to the sequence length. There is a robust body of prior work on optimizing SDPA
for processor architectures such as GPUs and TPUs, but little work has targeted
non-processor architectures. In this work, we show how the architecture and
execution model of Streaming Dataflow Accelerators can help tackle this
challenge. We first define abstract hardware that adopts a streaming execution
model, and we implement a cycle-accurate simulator of the abstract hardware
using the Dataflow Abstract Machine simulation framework. Second, we implement
the naive SDPA algorithm on this abstract hardware and show it requires linear
(O(N)) intermediate memory. Third, we modify the naive algorithm, taking
inspiration from prior processor-oriented works, by reordering the
multiplication and division operations. Finally, we map the modified algorithm
to abstract hardware, and confirm that the implementation computes SDPA at full
throughput while only using a constant amount (O(1)) of intermediate memory.
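As a concrete reference point, the following is a minimal NumPy sketch of the naive per-query computation the abstract describes; the function and variable names are illustrative, not taken from the paper. Because the softmax denominator depends on every score, the full score row must be buffered before any output can be produced, which is the source of the O(N) intermediate memory.

```python
import numpy as np

def naive_sdpa_row(q, K, V):
    """Naive scaled dot-product attention for a single query vector q.

    K: (N, d) key matrix, V: (N, d_v) value matrix.
    All N scores must be materialized before the softmax denominator
    is known, so intermediate memory grows as O(N).
    """
    d = q.shape[0]
    s = (K @ q) / np.sqrt(d)      # N scores, all buffered at once
    e = np.exp(s - s.max())       # numerically stabilized exponentials
    return (e @ V) / e.sum()      # weight the values, then divide by the sum
```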
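The reordering the abstract alludes to resembles the online-softmax technique from the prior processor-oriented work it cites as inspiration: the division by the softmax denominator is deferred, and partial results are rescaled as new scores arrive. The sketch below is an assumption about that reordering, not the paper's exact dataflow mapping; `kv_stream` is a hypothetical iterator standing in for the streamed key/value tokens.

```python
import numpy as np

def streaming_sdpa_row(q, kv_stream):
    """Attention for one query over a stream of (key, value) pairs.

    Keeps only a running maximum m, a running softmax denominator l,
    and a partial output accumulator acc: O(1) intermediate state
    regardless of the sequence length N.
    """
    d = q.shape[0]
    m, l, acc = -np.inf, 0.0, 0.0
    for k, v in kv_stream:
        s = float(k @ q) / np.sqrt(d)
        m_new = max(m, s)
        scale = np.exp(m - m_new)          # np.exp(-inf) == 0.0 on the first token
        l = l * scale + np.exp(s - m_new)  # rescale old denominator, add new term
        acc = acc * scale + np.exp(s - m_new) * v
        m = m_new
    return acc / l                         # single deferred division
```

On the same inputs, `streaming_sdpa_row(q, zip(K, V))` should match `naive_sdpa_row(q, K, V)` up to floating-point error, while holding only constant state per query.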