GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
arXiv (2024)
Abstract
Training Large Language Models (LLMs) presents significant memory challenges,
predominantly due to the growing size of weights and optimizer states. Common
memory-reduction approaches, such as low-rank adaptation (LoRA), add a
trainable low-rank matrix to the frozen pre-trained weight in each layer,
reducing trainable parameters and optimizer states. However, such approaches
typically underperform training with full-rank weights in both pre-training and
fine-tuning stages since they limit the parameter search to a low-rank subspace
and alter the training dynamics, and further, may require full-rank warm start.
In this work, we propose Gradient Low-Rank Projection (GaLore), a training
strategy that allows full-parameter learning but is more memory-efficient than
common low-rank adaptation methods such as LoRA. Our approach reduces memory
usage by up to 65.5% in optimizer states while maintaining both efficiency and
performance for pre-training on LLaMA 1B and 7B architectures with C4 dataset
with up to 19.7B tokens, and on fine-tuning RoBERTa on GLUE tasks. Our 8-bit
GaLore further reduces optimizer memory by up to 82.5% and total training
memory by 63.3%, compared to a BF16 baseline. Notably, we demonstrate, for the
first time, the feasibility of pre-training a 7B model on consumer GPUs with
24GB memory (e.g., NVIDIA RTX 4090) without model parallel, checkpointing, or
offloading strategies.
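
The core idea admits a compact illustration. Below is a minimal sketch of gradient low-rank projection paired with an Adam-style update for a single weight matrix, written in PyTorch. The names (GaLoreAdam, get_projector, rank, update_gap) are illustrative assumptions, not the paper's released API, and bias correction plus the paper's per-layer scale factor are omitted for brevity.

```python
import torch

def get_projector(grad: torch.Tensor, rank: int) -> torch.Tensor:
    # Top-r left singular vectors of the gradient span the low-rank subspace.
    U, _, _ = torch.linalg.svd(grad, full_matrices=False)
    return U[:, :rank]  # (m, r)

class GaLoreAdam:
    """Illustrative GaLore-style optimizer for one (m, n) weight matrix."""
    def __init__(self, rank=128, update_gap=200, lr=1e-3,
                 betas=(0.9, 0.999), eps=1e-8):
        self.rank, self.update_gap = rank, update_gap
        self.lr, self.betas, self.eps = lr, betas, eps
        self.t = 0
        self.P = None           # projector, refreshed every update_gap steps
        self.m = self.v = None  # Adam moments live in the rank-r space

    @torch.no_grad()
    def step(self, weight: torch.Tensor):
        grad = weight.grad                       # full-rank gradient, (m, n)
        if self.t % self.update_gap == 0:
            self.P = get_projector(grad, self.rank)
            # Moments are reset here for simplicity; the paper discusses
            # carrying optimizer state across subspace switches.
            self.m = torch.zeros(self.rank, grad.shape[1],
                                 device=grad.device, dtype=grad.dtype)
            self.v = torch.zeros_like(self.m)
        self.t += 1
        r = self.P.T @ grad                      # project down: (r, n)
        b1, b2 = self.betas
        self.m = b1 * self.m + (1 - b1) * r
        self.v = b2 * self.v + (1 - b2) * r * r
        update = self.m / (self.v.sqrt() + self.eps)  # Adam step at rank r
        weight -= self.lr * (self.P @ update)    # project back up: (m, n)
```

Because the Adam moments are (r, n) rather than (m, n), optimizer-state memory scales with the rank r instead of the full dimension, which is where the reported savings in optimizer states come from; training nonetheless remains full-parameter, since the full weight matrix is updated at every step rather than a frozen weight plus a low-rank adapter as in LoRA.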