Channel redundancy and overlap in convolutional neural networks with channel-wise NNK graphs

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022

Abstract
Feature spaces in the deep layers of convolutional neural networks (CNNs) are often very high-dimensional and difficult to interpret. However, convolutional layers consist of multiple channels that are activated by different types of inputs, which suggests that more insight may be gained by studying the channels and how they relate to each other. In this paper, we first theoretically analyze channel-wise non-negative kernel (CW-NNK) regression graphs, which allow us to quantify the overlap between channels and, indirectly, the intrinsic dimension of the data representation manifold. We find that redundancy between channels is significant and varies with layer depth and the level of regularization during training. Additionally, we observe a correlation between channel overlap in the last convolutional layer and generalization performance. Our experimental results demonstrate that these techniques can lead to a better understanding of deep representations.
Key words
Convolutional neural networks, channel redundancy, graph construction, intrinsic dimension, interpretability
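
To make the abstract's idea concrete, the sketch below builds a non-negative kernel regression (NNK) neighborhood for each data point within a channel's activations and scores the overlap between two channels. This is a minimal illustrative reconstruction, not the authors' code: the Gaussian kernel, the fixed neighborhood size k, and the Jaccard-style overlap between NNK neighborhoods are assumptions made for the sketch, and the paper's exact kernel, overlap definition, and intrinsic-dimension estimates may differ. All function names are hypothetical.

```python
# Sketch of channel-wise NNK graphs and a simple channel-overlap score.
# Assumptions (not from the paper): Gaussian kernel, fixed k, Jaccard overlap.
import numpy as np
from scipy.optimize import nnls


def nnk_neighbors(X, k=10, sigma=1.0, tol=1e-8):
    """For each row of X, return its set of NNK neighbors.

    NNK starts from the k nearest neighbors under a kernel and keeps only
    those that receive a positive weight in the non-negative fit
        min_{theta >= 0} theta^T K_SS theta - 2 theta^T k_Si,
    solved here as an NNLS problem via a Cholesky factor K_SS = L L^T,
    since the objective equals ||L^T theta - L^{-1} k_Si||^2 + const.
    """
    n = X.shape[0]
    # Pairwise Gaussian kernel values.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    neighbors = []
    for i in range(n):
        # k most similar points to i (excluding i itself).
        order = np.argsort(-K[i])
        S = [j for j in order if j != i][:k]
        K_SS = K[np.ix_(S, S)] + tol * np.eye(k)  # small ridge for stability
        k_Si = K[S, i]
        L = np.linalg.cholesky(K_SS)
        b = np.linalg.solve(L, k_Si)
        theta, _ = nnls(L.T, b)
        # NNK neighbors are those with strictly positive regression weight.
        neighbors.append({S[m] for m in range(k) if theta[m] > tol})
    return neighbors


def channel_overlap(feats_a, feats_b, k=10, sigma=1.0):
    """Mean Jaccard overlap between NNK neighborhoods of two channels.

    feats_a, feats_b: (n_samples, n_features) activations of the same
    inputs in two channels. Overlap near 1 suggests the channels induce
    redundant neighborhood structure.
    """
    na = nnk_neighbors(feats_a, k, sigma)
    nb = nnk_neighbors(feats_b, k, sigma)
    scores = [len(a & b) / len(a | b) for a, b in zip(na, nb) if a | b]
    return float(np.mean(scores))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=(200, 16))
    # Channel B is a noisy copy of channel A, so overlap should be high.
    print(channel_overlap(base, base + 0.1 * rng.normal(size=base.shape)))
```

In this reading, a high average overlap means two channels assign nearly the same NNK neighbors to each input, i.e., they encode redundant local geometry, which matches the abstract's use of channel overlap as a redundancy measure.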