Striped Attention: Faster Ring Attention for Causal Transformers.
CoRR (2023)
Abstract
To help address the growing demand for ever-longer sequence lengths in
transformer models, Liu et al. recently proposed Ring Attention, an exact
attention algorithm capable of overcoming per-device memory bottlenecks by
distributing self-attention across multiple devices. In this paper, we study
the performance characteristics of Ring Attention in the important special case
of causal transformer models, and identify a key workload imbalance due to the
triangular structure of causal attention computations. We propose a simple
extension to Ring Attention, which we call Striped Attention, to fix this
imbalance. Instead of devices holding contiguous subsequences, each device holds a
subset of tokens distributed uniformly throughout the sequence, which we
demonstrate leads to more even workloads. In experiments running Striped
Attention on A100 GPUs and TPUv4s, we achieve up to 1.45x
end-to-end throughput improvements over the original Ring Attention algorithm
on causal transformer training at a sequence length of 256k. Furthermore, on 16
TPUv4 chips, we achieve 1.65x speedups at sequence lengths of
786k. We release the code for our experiments as open source.
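The core idea in the abstract, contiguous query/key blocks versus a striped token layout, can be illustrated with a small sketch. The Python snippet below is our own illustration, not the authors' released code; the helper names, the 16-token sequence, and the 4-device setup are hypothetical. It counts how many query/key pairs survive the causal mask on each device during a single ring step under the two layouts.

```python
def contiguous_partition(seq_len, num_devices):
    # Ring Attention layout: device d owns a contiguous block of tokens.
    block = seq_len // num_devices
    return [list(range(d * block, (d + 1) * block)) for d in range(num_devices)]


def striped_partition(seq_len, num_devices):
    # Striped Attention layout: device d owns every num_devices-th token,
    # so its tokens are spread uniformly across the whole sequence.
    return [list(range(d, seq_len, num_devices)) for d in range(num_devices)]


def causal_pairs(queries, keys):
    # Number of (query, key) pairs that survive the causal mask (key <= query).
    return sum(1 for q in queries for k in keys if k <= q)


def per_round_load(partition, ring_step):
    # At ring step t, device d attends its queries against the key/value block
    # originally owned by device (d - t) mod N. The round takes as long as the
    # busiest device, so an uneven list here means idle compute on other devices.
    n = len(partition)
    return [causal_pairs(partition[d], partition[(d - ring_step) % n])
            for d in range(n)]


if __name__ == "__main__":
    seq_len, num_devices, step = 16, 4, 1
    print("contiguous:", per_round_load(contiguous_partition(seq_len, num_devices), step))
    print("striped:   ", per_round_load(striped_partition(seq_len, num_devices), step))
    # contiguous: [0, 16, 16, 16] -> one device is fully masked out while others do full blocks
    # striped:    [6, 10, 10, 10] -> per-device loads are nearly even
```

In this toy setting the contiguous layout leaves one device completely idle during the step while the others compute full blocks, whereas the striped layout gives every device roughly the same amount of unmasked work, which is the imbalance the paper identifies and fixes.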