Pruning Large Language Models to Intra-module Low-rank Architecture with Transitional Activations
arXiv (2024)
Abstract
Structured pruning fundamentally reduces computational and memory overheads
of large language models (LLMs) and offers a feasible solution for end-side LLM
deployment. Structurally pruned models remain dense and high-precision, and are
thus highly compatible with further tuning and compression. However, because
coarse-grained structured pruning inflicts substantial damage on the highly
interconnected model, achieving a high compression ratio for scaled-up LLMs
remains a challenge. In
this paper, we introduce a task-agnostic structured pruning approach coupled
with a compact Transformer architecture design. The proposed approach, named
TransAct, reduces transitional activations inside multi-head attention (MHA)
and multi-layer perceptron (MLP) modules, while preserving the inter-module
activations that are sensitive to perturbations. Hence, the LLM is pruned into
an intra-module low-rank architecture, significantly reducing weights, the KV
cache, and attention computation. TransAct is implemented on the LLaMA model and
evaluated on downstream benchmarks. Results verify the optimality of our
approach at high compression ratios with respect to both efficiency and performance.
Further, ablation studies reveal the strength of activation-guided iterative
pruning and provide experimental analysis on the redundancy of MHA and MLP
modules.
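
The sketch below illustrates, not as the authors' code but as a minimal reading of the abstract, what an "intra-module low-rank" Transformer block could look like: the hidden size exchanged between modules is preserved, while the transitional widths inside MHA and MLP are pruned. The names `attn_inner`, `mlp_inner`, and `LowRankBlock` are illustrative assumptions, not identifiers from the paper.

```python
# Hypothetical sketch of an intra-module low-rank block: inter-module
# activations keep the full hidden size, while the transitional (intra-module)
# widths are reduced, shrinking Q/K/V/O weights, the KV cache, and the MLP.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowRankBlock(nn.Module):
    def __init__(self, hidden=4096, attn_inner=1024, mlp_inner=2752, n_heads=8):
        super().__init__()
        assert attn_inner % n_heads == 0
        self.n_heads, self.head_dim = n_heads, attn_inner // n_heads
        # MHA: project into a reduced transitional width, then back to hidden.
        self.q = nn.Linear(hidden, attn_inner, bias=False)
        self.k = nn.Linear(hidden, attn_inner, bias=False)  # smaller KV cache
        self.v = nn.Linear(hidden, attn_inner, bias=False)
        self.o = nn.Linear(attn_inner, hidden, bias=False)
        # MLP: reduced intermediate (transitional) dimension.
        self.up = nn.Linear(hidden, mlp_inner, bias=False)
        self.down = nn.Linear(mlp_inner, hidden, bias=False)
        self.act = nn.SiLU()

    def forward(self, x):
        b, t, _ = x.shape
        q = self.q(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.k(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        v = self.v(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        attn = F.scaled_dot_product_attention(q, k, v)
        x = x + self.o(attn.transpose(1, 2).reshape(b, t, -1))
        return x + self.down(self.act(self.up(x)))
```

Under these assumed sizes, the per-block parameter count and KV-cache footprint scale with `attn_inner` and `mlp_inner` rather than with the full hidden size, which is the efficiency effect the abstract attributes to reducing transitional activations.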