
Efficient architecture for deep neural networks with heterogeneous sensitivity

Hyunjoong Cho, Jinhyeok Jang, Chanhyeok Lee, Seungjoon Yang

Neural Networks (2021)

Abstract
In this study, we present a neural network that consists of nodes with heterogeneous sensitivity. Each node in the network is assigned a variable that determines the sensitivity with which it learns to perform a given task. The network is trained via a constrained optimization that maximizes the sparsity of the sensitivity variables while ensuring optimal network performance. As a result, the network learns to perform the given task using only a few sensitive nodes. Insensitive nodes, i.e., nodes with zero sensitivity, can be removed from a trained network to obtain a computationally efficient network. Removing zero-sensitivity nodes has no effect on network performance, because the network has already learned to perform the task without them. The regularization parameter used to solve the optimization problem is found simultaneously during training. To validate our approach, we designed networks with computationally efficient architectures for tasks such as autoregression, object recognition, facial expression recognition, and object detection using various datasets. In our experiments, the networks designed by the proposed method achieved the same or higher performance with far less computational complexity. © 2020 Elsevier Ltd. All rights reserved.
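The pruning idea described in the abstract — per-node sensitivity variables driven to exact zero by a sparsity penalty, after which zero-sensitivity nodes are removed with no change in output — can be sketched as follows. This is a minimal toy illustration only (L1-regularized sensitivities on a linear model, trained by proximal gradient descent), not the authors' constrained-optimization procedure; all variable names and hyperparameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression: the target depends on only 2 of 8 input features, so a
# sparsity-inducing penalty on the sensitivities should zero out the rest.
X = rng.normal(size=(200, 8))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1]

w = rng.normal(scale=0.1, size=8)  # ordinary weights
s = np.ones(8)                     # one sensitivity variable per node

lr, lam = 0.05, 0.02               # step size and sparsity strength
for _ in range(2000):
    err = X @ (w * s) - y
    grad = X.T @ err / len(y)      # gradient of the squared loss w.r.t. (w * s)
    w -= lr * grad * s
    s -= lr * grad * w
    # Proximal step for the L1 penalty: soft-threshold sensitivities,
    # pushing the insensitive ones to exactly zero.
    s = np.sign(s) * np.maximum(np.abs(s) - lr * lam, 0.0)

keep = s != 0                      # zero-sensitivity nodes can be removed
print(f"kept {keep.sum()} of 8 nodes")
```

Because a pruned node contributes exactly `w_i * s_i = 0`, the reduced model `X[:, keep] @ (w[keep] * s[keep])` reproduces the full model's output exactly, mirroring the abstract's claim that removing zero-sensitivity nodes has no effect on performance.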
Keywords
Deep neural networks, Efficient architecture, Heterogeneous sensitivity, Constrained optimization, Simultaneous regularization parameter selection