HGRN2: Gated Linear RNNs with State Expansion
arXiv (2024)
Abstract
The hierarchically gated linear RNN (HGRN; Qin et al., 2023) has demonstrated
competitive training speed and performance in language modeling while offering
efficient inference. However, the recurrent state size of HGRN remains
relatively small, which limits its expressiveness. To address this issue,
inspired by linear attention, we introduce a simple outer-product-based state
expansion mechanism so that the recurrent state size can be significantly
enlarged without introducing any additional parameters. The linear attention
form also allows for hardware-efficient training. Our extensive experiments
verify the advantage of HGRN2 over HGRN1 in language modeling, image
classification, and the Long Range Arena. Our largest 3B HGRN2 model slightly
outperforms Mamba and the LLaMa-architecture Transformer for language modeling
in a controlled experimental setting, and performs competitively with many
open-source 3B models in downstream evaluation while using far fewer total
training tokens.
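
To make the outer-product state expansion concrete, here is a minimal sketch of a gated linear recurrence in the spirit the abstract describes: the per-channel vector state of an HGRN-style RNN is expanded to a matrix via a rank-1 outer-product update, with the input "key" tied to the complement of the forget gate so that no new parameters are introduced. The function name, gate parameterization, and tensor shapes are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def hgrn2_recurrence(q, f, v):
    """Illustrative sketch (assumed shapes): q, f, v are (T, d) tensors,
    with f a per-channel forget gate in (0, 1). The key is tied to the
    gate as k_t = 1 - f_t, so expanding the state from a d-vector to a
    d x d matrix adds no extra parameters."""
    T, d = q.shape
    S = torch.zeros(d, d)           # expanded recurrent state (d x d)
    outputs = []
    for t in range(T):
        k = 1.0 - f[t]              # input "key" reuses the forget gate
        # diagonal forget along the key axis + rank-1 outer-product update:
        # S_t = diag(f_t) S_{t-1} + k_t v_t^T
        S = f[t].unsqueeze(1) * S + torch.outer(k, v[t])
        outputs.append(q[t] @ S)    # query reads out of the matrix state
    return torch.stack(outputs)

# Example usage with random inputs:
T, d = 8, 4
q = torch.randn(T, d)
f = torch.sigmoid(torch.randn(T, d))  # gate values in (0, 1)
v = torch.randn(T, d)
out = hgrn2_recurrence(q, f, v)       # (T, d)
```

Written in this linear-attention form, the recurrence can also be computed in a parallel chunkwise manner rather than step by step, which is what the abstract refers to as hardware-efficient training.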