Unlocking the Global Synergies in Low-Rank Adapters
arXiv (2024)
Abstract
Low-Rank Adaptation (LoRA) has been the de facto parameter-efficient
fine-tuning technique for large language models. We present HeteroLoRA, a
lightweight search algorithm that leverages zero-cost proxies to allocate the
limited LoRA trainable parameters across the model for better fine-tuned
performance. In addition to the allocation for the standard LoRA-adapted
models, we also demonstrate the efficacy of HeteroLoRA by performing the
allocation in a more challenging search space that includes LoRA modules and
LoRA-adapted shortcut connections. Experiments show that HeteroLoRA enables
improvements in model performance given the same parameter budget. For example,
on MRPC, we see an improvement of 1.6% given the same training parameter
budget. We will open-source our algorithm once the paper is accepted.
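To make the idea concrete, below is a minimal sketch, not the authors' implementation: the `LoRALinear` module follows the standard LoRA parameterisation W x + (alpha/r) B A x, while the gradient-norm proxy (`grad_norm_proxy`) and the greedy budget allocation (`allocate_ranks`) are hypothetical stand-ins, since the abstract does not specify which zero-cost proxy or allocation procedure HeteroLoRA uses.

```python
# Sketch of zero-cost-proxy-guided LoRA parameter allocation.
# Assumptions (not from the paper): gradient-norm proxy, greedy allocation.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update (standard LoRA)."""

    def __init__(self, base: nn.Linear, rank: int, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pre-trained weights
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W x + (alpha/r) * B A x
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)


def grad_norm_proxy(module: LoRALinear, x: torch.Tensor) -> float:
    """Hypothetical zero-cost proxy: norm of the LoRA-parameter gradients
    on a single mini-batch, computed without any training."""
    out = module(x).sum()
    grads = torch.autograd.grad(out, [module.A, module.B])
    return sum(g.norm().item() for g in grads)


def allocate_ranks(candidates: dict, x: torch.Tensor, budget_params: int) -> list:
    """Greedy allocation sketch: spend the trainable-parameter budget on the
    candidate adapter sites with the highest proxy scores."""
    scored = sorted(candidates.items(),
                    key=lambda kv: grad_norm_proxy(kv[1], x),
                    reverse=True)
    chosen, spent = [], 0
    for name, mod in scored:
        cost = mod.A.numel() + mod.B.numel()
        if spent + cost <= budget_params:
            chosen.append(name)
            spent += cost
    return chosen


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(8, 64)                        # one calibration mini-batch
    candidates = {f"layer{i}": LoRALinear(nn.Linear(64, 64), rank=8)
                  for i in range(4)}
    # Budget large enough for three rank-8 adapters (A and B each 8x64).
    print(allocate_ranks(candidates, x, budget_params=3 * 2 * 8 * 64))
```

In this sketch the search space contains only LoRA-adapted linear layers; extending it to LoRA-adapted shortcut connections, as the paper proposes, would amount to adding those connections as further entries in `candidates`.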