Double reuses based residual network

Qian Liu, Yixiong Zhong

Neurocomputing (2024)

Abstract
Deep residual networks (ResNet) have shown remarkable performance in image recognition tasks, owing to the shortcut connections between layers close to the input and those close to the output. Densely connected convolutional networks (DenseNet) have further improved recognition performance through dense feature reuse. To improve the performance of the residual units in ResNet and the feature reuse in DenseNet, we propose a simple and effective convolutional network architecture, named the double-reuse-based residual network (DRRNet). DRRNet improves the residual unit of ResNet by adding feature reuse connections that combine all feature maps from the convolutional layers to produce the residual, and it uses a residual reuse path outside the units to reuse all residuals as the final feature maps for classification. The residual learning used in DRRNet alleviates the vanishing-gradient problem. The double reuse, comprising inner-unit feature reuse and outer-unit residual reuse, effectively reduces computational cost compared with the dense connections in DenseNet and further strengthens forward feature propagation. DRRNet is evaluated on three object recognition benchmark datasets and one object detection dataset. In comparison with the state of the art, DRRNet achieves a good balance between classification accuracy and computational cost, as well as the best detection performance.
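The connectivity pattern described in the abstract can be illustrated with a minimal sketch. This is a hypothetical 1-D NumPy toy (plain matrix layers in place of convolutions, arbitrary sizes and depths), not the authors' implementation: inside each unit, every layer sees the concatenation of all earlier feature maps (inner-unit feature reuse), the combined features produce the residual that is added to the unit input, and the residuals of all units are gathered outside the units as the final features (outer-unit residual reuse).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def drr_unit(x, weights):
    """One toy DRRNet-style unit (assumed structure, 1-D sketch):
    each inner layer consumes the concatenation of all previous
    feature maps; their combination forms the residual, which is
    added back to the unit input."""
    feats = [x]
    for W in weights[:-1]:
        h = relu(np.concatenate(feats) @ W)   # inner-unit feature reuse
        feats.append(h)
    residual = np.concatenate(feats) @ weights[-1]  # combine all feature maps
    return x + residual, residual             # residual connection; expose residual

d = 8  # toy feature width (assumption)

def make_unit():
    # Two inner layers per unit; input widths grow with concatenation.
    return [rng.normal(size=(d, d)) * 0.1,       # layer 1 sees x        (d)
            rng.normal(size=(2 * d, d)) * 0.1,   # layer 2 sees [x, h1]  (2d)
            rng.normal(size=(3 * d, d)) * 0.1]   # residual from [x, h1, h2]

x = rng.normal(size=d)
residuals = []
h = x
for _ in range(3):                     # three stacked units (arbitrary depth)
    h, r = drr_unit(h, make_unit())
    residuals.append(r)

# Outer-unit residual reuse: all residuals become the final features.
final_features = np.concatenate(residuals)
print(final_features.shape)  # (24,)
```

Note how the final classifier features come from the concatenated residuals rather than from the last unit's output alone; this is the "residual reuse path outside units" the abstract describes, and it keeps the concatenation growth confined to one vector per unit instead of DenseNet's per-layer dense connections.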
Key words
Deep convolutional neural network, Feature reuse, Residual reuse, Deep residual network, Densely connected convolutional network