Mixture-of-Subspaces in Low-Rank Adaptation
CoRR (2024)
Abstract
In this paper, we introduce a subspace-inspired Low-Rank Adaptation (LoRA)
method, which is computationally efficient, easy to implement, and readily
applicable to large language, multimodal, and diffusion models. Initially, we
equivalently decompose the weights of LoRA into two subspaces, and find that
simply mixing them can enhance performance. To study this phenomenon, we
revisit it through a fine-grained subspace lens, showing that this modification
is equivalent to employing a fixed mixer to fuse the subspaces. To be more
flexible, we jointly learn the mixer with the original LoRA weights, and term
the method Mixture-of-Subspaces LoRA (MoSLoRA). MoSLoRA consistently
outperforms LoRA on tasks in different modalities, including commonsense
reasoning, visual instruction tuning, and subject-driven text-to-image
generation, demonstrating its effectiveness and robustness. Code is available
at https://github.com/wutaiqiang/MoSLoRA.
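
For concreteness, below is a minimal PyTorch sketch of the idea the abstract describes: vanilla LoRA adds a low-rank update (alpha / r) * B A to a frozen weight W, and MoSLoRA inserts a learnable r-by-r mixer M between the two factors, giving an update (alpha / r) * B M A. The class name `MoSLoRALinear`, the hyperparameter defaults, and the initialization choices here are illustrative assumptions, not the authors' official implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class MoSLoRALinear(nn.Module):
    """Hypothetical MoSLoRA-style linear layer (a sketch, not the official code).

    Vanilla LoRA:   y = x W^T + (alpha / r) * x A^T B^T
    MoSLoRA:        y = x W^T + (alpha / r) * x A^T M^T B^T
    where A is (r, d_in), B is (d_out, r), and M is a learnable (r, r)
    mixer. With M fixed to the identity, this reduces to vanilla LoRA,
    which pairs the rank-1 subspaces of A and B one-to-one; a dense M
    lets every subspace of A be fused with every subspace of B.
    """

    def __init__(self, in_features: int, out_features: int,
                 r: int = 8, alpha: int = 16):
        super().__init__()
        # Frozen pretrained weight (random here only for the sketch).
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)
        # Trainable low-rank factors; B starts at zero so the adapter is
        # initially a no-op, as in common LoRA practice.
        self.lora_A = nn.Parameter(torch.empty(r, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        # Learnable subspace mixer; a non-degenerate random init
        # (Kaiming uniform here) is assumed.
        self.mixer = nn.Parameter(torch.empty(r, r))
        nn.init.kaiming_uniform_(self.lora_A)
        nn.init.kaiming_uniform_(self.mixer)
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.mixer.T @ self.lora_B.T
        return base + self.scaling * update


# Usage: adapt a 768-dim projection with rank-8 subspace mixing.
layer = MoSLoRALinear(768, 768, r=8)
out = layer(torch.randn(4, 768))  # shape: (4, 768)
```

Note that the mixer adds only r * r extra trainable parameters (64 for r = 8), which is negligible next to the (d_in + d_out) * r parameters of A and B, and the whole update B M A can still be merged into W after training, so inference cost matches vanilla LoRA.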