SmartFRZ: An Efficient Training Framework using Attention-Based Layer Freezing
ICLR (2024)
Abstract
There has been a proliferation of artificial intelligence applications, where
model training is key to delivering high-quality services. However, the model
training process is both time-intensive and energy-intensive, which inevitably
conflicts with users' demand for application efficiency. Layer freezing has
been proposed as an efficient model training technique to improve training
efficiency. Although existing layer freezing methods demonstrate great
potential to reduce model training costs, they still suffer from shortcomings
such as limited generalizability and compromised accuracy. For instance,
existing layer freezing methods either require the freezing configuration to
be manually defined before training, which does not transfer across different
networks, or rely on heuristic freezing criteria that cannot guarantee decent
accuracy in different scenarios. A generic and intelligent layer freezing
method that can automatically perform "in-situation" layer freezing for
different networks during training is therefore still lacking. To this end, we
propose SmartFRZ, a generic and efficient training framework. The core
technique in SmartFRZ is attention-guided layer freezing, which automatically
selects the appropriate layers to freeze without compromising accuracy.
Experimental results show that SmartFRZ effectively reduces the amount of
computation in training, achieves significant training acceleration, and
outperforms state-of-the-art layer freezing approaches.
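The abstract does not detail SmartFRZ's attention-based predictor, but the underlying idea of layer freezing, permanently stopping gradient computation for layers that have converged so their backward-pass cost is saved, can be sketched in a few lines of PyTorch. The sketch below is illustrative only: `layer_score`, the weight-change criterion, and the `threshold` value are hypothetical stand-ins for SmartFRZ's actual attention-guided selection, which the abstract does not specify.

```python
# Minimal sketch of layer freezing during training (assumptions noted below;
# this is NOT SmartFRZ's actual attention-based predictor).
import torch
import torch.nn as nn


def layer_score(layer: nn.Module, prev_state: dict) -> float:
    """Hypothetical convergence score: mean absolute weight change since the
    last check (lower = closer to converged). Updates prev_state in place."""
    deltas = []
    for name, p in layer.named_parameters():
        if name in prev_state:
            deltas.append((p.detach() - prev_state[name]).abs().mean().item())
        prev_state[name] = p.detach().clone()
    if not deltas:
        return float("inf")  # first observation: no history yet, never freeze
    return sum(deltas) / len(deltas)


def maybe_freeze_layers(layers, states, threshold=1e-4):
    """Freeze layers whose score falls below threshold. Freezing proceeds
    front-to-back, since early layers typically converge first."""
    for layer, state in zip(layers, states):
        if any(p.requires_grad for p in layer.parameters()):
            if layer_score(layer, state) < threshold:
                for p in layer.parameters():
                    p.requires_grad_(False)  # no grads, no optimizer updates
            else:
                break  # stop at the first still-unconverged trainable layer
```

In a training loop, `maybe_freeze_layers` would be invoked periodically (e.g., every few hundred steps) on the model's layer list, with one state dict per layer; front-to-back freezing matches the common observation that shallow layers converge before deep ones.

```python
# Hypothetical usage with a transformer-style model:
# blocks = list(model.encoder.layers)
# states = [{} for _ in blocks]
# if step % 200 == 0:
#     maybe_freeze_layers(blocks, states)
```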