CoMERA: Computing- and Memory-Efficient Training via Rank-Adaptive Tensor Optimization
CoRR (2024)
Abstract
Training large AI models such as deep-learning recommendation systems and
foundation language (or multi-modal) models requires massive GPU resources and
computing time. The high training cost has become affordable only to big tech
companies, while also raising increasing concerns about its environmental impact. This
paper presents CoMERA, a Computing- and Memory-Efficient training method via
Rank-Adaptive tensor optimization. CoMERA achieves end-to-end rank-adaptive
tensor-compressed training via a multi-objective optimization formulation,
improving both the compression ratio and the model accuracy during training.
Our optimized numerical computation (e.g.,
optimized tensorized embedding and tensor-vector contractions) and GPU
implementation eliminate part of the run-time overhead of tensorized
training on GPUs. This leads to, for the first time, a 2-3× speedup per
training epoch compared with standard training. CoMERA also outperforms the
recent GaLore in terms of both memory and computing efficiency. Specifically,
CoMERA is 2× faster per training epoch and 9× more
memory-efficient than GaLore on a tested six-encoder transformer with
single-batch training. With further HPC optimization, CoMERA may significantly
reduce the training cost of large language models.
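To make the core idea concrete: tensor-compressed training replaces a large weight matrix with a few small low-rank tensor factors that are trained directly, so the full matrix is never stored. The PyTorch sketch below is an illustrative reconstruction of that idea, not the authors' implementation; the TTLinear class, the factor shapes, and the fixed rank are assumptions chosen for clarity, whereas CoMERA additionally adapts the ranks during training through its multi-objective formulation.

```python
import torch
import torch.nn as nn


class TTLinear(nn.Module):
    """Linear layer whose weight is stored as two tensor-train (TT) cores.

    A (m1*m2) x (n1*n2) weight matrix is never materialized; the forward
    pass contracts the input with the small cores directly, so parameter
    count and memory scale with the TT-rank instead of the full matrix
    size. The rank is fixed here for illustration only; CoMERA learns and
    adapts the ranks during training, which this sketch does not reproduce.
    """

    def __init__(self, in_shape=(32, 24), out_shape=(32, 24), rank=8):
        super().__init__()
        n1, n2 = in_shape
        m1, m2 = out_shape
        self.in_shape = in_shape
        # Two small TT cores replacing the dense (m1*m2) x (n1*n2) weight.
        self.core1 = nn.Parameter(torch.randn(m1, n1, rank) * 0.02)
        self.core2 = nn.Parameter(torch.randn(rank, m2, n2) * 0.02)
        self.bias = nn.Parameter(torch.zeros(m1 * m2))

    def forward(self, x):
        b = x.shape[0]
        # Fold the flat input (batch, n1*n2) into a (batch, n1, n2) tensor.
        x = x.view(b, *self.in_shape)
        # Contract the input with both cores without forming the full weight:
        # y[b,i,k] = sum_{j,l,r} core1[i,j,r] * core2[r,k,l] * x[b,j,l]
        y = torch.einsum('ijr,rkl,bjl->bik', self.core1, self.core2, x)
        return y.reshape(b, -1) + self.bias


if __name__ == "__main__":
    layer = TTLinear()
    dense_params = 32 * 24 * 32 * 24  # a full weight would hold ~590k values
    tt_params = sum(p.numel() for p in layer.parameters())
    print(f"TT parameters: {tt_params}, dense weight entries: {dense_params}")
    out = layer(torch.randn(4, 32 * 24))
    print(out.shape)  # torch.Size([4, 768])
```

In this toy configuration the stored parameters drop from roughly 590K dense weight entries to about 13K core entries, and the forward pass works on the small cores only; compression of this kind, combined with optimized contraction kernels, is what the abstract's memory and speed claims refer to.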