Stage-Wise Magnitude-Based Pruning for Recurrent Neural Networks

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2024)

Abstract
Recurrent neural networks (RNNs) have shown powerful performance in tackling various natural language processing (NLP) tasks, resulting in numerous powerful models containing both recurrent and feedforward neurons. On the other hand, the deep structure of RNNs heavily restricts their deployment on mobile devices, where quite a few applications involve NLP tasks. Magnitude-based pruning (MP) is a promising way to address this challenge. However, existing MP methods are mostly designed for feedforward neural networks without a recurrent structure and thus perform less satisfactorily when pruning models that contain RNN layers. In this article, a novel stage-wise MP method is proposed that explicitly takes the characteristic recurrent structure of RNNs into account and can effectively prune feedforward layers and RNN layers simultaneously. The connections of the network are first grouped into three types according to how they intersect with recurrent neurons. An optimization-based pruning method is then applied to compress each group of connections in turn. Empirical studies show that the proposed method performs significantly better than commonly used RNN pruning methods: up to 96.84% of connections are pruned with little or no degradation of precision indicators on the test datasets.
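As a rough illustration of the grouping idea described in the abstract, the sketch below applies plain magnitude-based pruning to a small LSTM-based model, with separate thresholds for input-to-hidden, hidden-to-hidden (recurrent), and feedforward weights. The model, group names, and sparsity levels are hypothetical, and the simple group-wise magnitude thresholding stands in for the paper's optimization-based stage-wise procedure, which is not detailed in the abstract.

import torch
import torch.nn as nn

# Toy model mixing an RNN (LSTM) layer with feedforward layers.
# This is an illustrative stand-in, not the architecture used in the paper.
class SmallRNNModel(nn.Module):
    def __init__(self, vocab=1000, embed=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.rnn = nn.LSTM(embed, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.fc(h)

def prune_group(tensors, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights,
    using one threshold computed over the whole group of tensors."""
    all_mags = torch.cat([t.detach().abs().flatten() for t in tensors])
    threshold = torch.quantile(all_mags, sparsity)
    with torch.no_grad():
        for t in tensors:
            t.mul_((t.abs() > threshold).float())

model = SmallRNNModel()

# Hypothetical per-group sparsity levels; in practice they would be tuned.
groups = {
    "input_to_hidden": ([model.rnn.weight_ih_l0], 0.90),
    "recurrent":       ([model.rnn.weight_hh_l0], 0.80),
    "feedforward":     ([model.fc.weight], 0.95),
}
for name, (tensors, sparsity) in groups.items():
    prune_group(tensors, sparsity)
    kept = sum(int(t.count_nonzero()) for t in tensors)
    total = sum(t.numel() for t in tensors)
    print(f"{name}: kept {kept}/{total} weights")

In this sketch each group gets its own threshold, which mirrors the idea that recurrent and feedforward connections should not be pruned against a single global magnitude cutoff; the actual stage-wise, optimization-based criterion is described in the full paper.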
Key words
Computational modeling, Recurrent neural networks, Neurons, Sparse matrices, Task analysis, Hardware, Training, Deep neural network (DNN) compression, language models, machine translation, optimization, recurrent neural networks (RNNs)