Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations
CoRR (2024)
Abstract
Large-scale recommendation systems are characterized by their reliance on
high-cardinality, heterogeneous features and the need to handle tens of
billions of user actions on a daily basis. Despite being trained on huge
volumes of data with thousands of features, most Deep Learning Recommendation
Models (DLRMs) in industry fail to scale with compute.
Inspired by the success of Transformers in the language and vision domains,
we revisit fundamental design choices in recommendation systems. We reformulate
recommendation problems as sequential transduction tasks within a generative
modeling framework ("Generative Recommenders"), and propose a new
architecture, HSTU, designed for high-cardinality, non-stationary streaming
recommendation data.
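The sequential-transduction reformulation can be illustrated with a minimal sketch (an assumption for illustration, not the paper's HSTU implementation): a user's chronological action history is treated like a token stream, and the model is trained on a causal next-token objective, exactly as in language modeling.

```python
# Illustrative sketch only: casting recommendation as generative sequence
# transduction. A user's ordered action IDs become the token sequence, and
# the training target is that sequence shifted by one step, so position t
# learns to predict the action at t+1 (causal next-action prediction).

def make_transduction_pairs(actions, max_len=8192):
    """Build (input, target) pairs for next-action prediction.

    `actions` is a chronologically ordered list of item/action IDs.
    `max_len` caps the context window (8192 matches the sequence
    length benchmarked in the abstract).
    """
    seq = actions[-(max_len + 1):]   # keep only the most recent window
    return seq[:-1], seq[1:]         # (inputs, next-action targets)

# Example: a user who interacted with items 3, 7, 7, 1, 9 in order.
inputs, targets = make_transduction_pairs([3, 7, 7, 1, 9])
# inputs  -> [3, 7, 7, 1]
# targets -> [7, 7, 1, 9]
```

Under this framing, ranking and retrieval both reduce to predicting the next action given the history, which is what lets model quality scale with training compute the way language models do.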
HSTU outperforms baselines on synthetic and public datasets by up to 65.8%
in NDCG, and is 5.3x to 15.2x faster than FlashAttention2-based Transformers on
8192-length sequences. HSTU-based Generative Recommenders, with 1.5 trillion
parameters, improve metrics in online A/B tests by 12.4% and have been
deployed on multiple surfaces of a large internet platform with billions of
users. More importantly, the model quality of Generative Recommenders
empirically scales as a power law of training compute across three orders of
magnitude, up to GPT-3/LLaMa-2 scale, which reduces the carbon footprint needed
for future model development and further paves the way for the first
foundational models in recommendations.