Supervision dropout: guidance learning in deep neural network

Multimedia Tools and Applications (2022)

Abstract
In deep neural networks, generalization is a vital evaluation metric. Because it helps to avoid over-fitting, Dropout plays an important role in improving the generalization of deep neural networks. However, traditional Dropout and its variants do not fully exploit the training data or the real-time performance of the network, so they lack specificity in the selection of inactivated neurons and the planning of dropout rates, which weakens their ability to enhance generalization. This paper therefore proposes an improved Dropout method. Since both the training data and the real-time performance of the network can be quantified by the loss, the method uses the loss of the network prediction to guide the selection of inactivated neurons and the determination of dropout rates. The selection is performed by a genetic algorithm, and its results are used to plan the dropout rate. In essence, this approach encourages the subset of neurons with the highest loss to be trained, which increases the robustness of the neurons and thus improves the generalization of the network. The experimental results demonstrate that the proposed method achieves better generalization on the MiniImageNet and Caltech-256 datasets. Compared with the backbone network, the accuracy improves from 66.56% to 72.95%.
Keywords
Deep neural network, Dropout, Genetic algorithm, Dropout rate
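
The following is a minimal sketch of the loss-guided idea described in the abstract: a small genetic algorithm searches for the binary keep-mask whose retained neurons incur the highest loss, and the planned dropout rate is derived from that mask. It assumes a simple two-layer classifier with attributes fc1 and fc2; the helper names (evaluate_mask, ga_dropout_mask) and all hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of loss-guided dropout mask search (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def evaluate_mask(model, mask, x, y):
    """Loss obtained when the hidden layer is gated by a binary keep-mask."""
    h = F.relu(model.fc1(x)) * mask            # zero entries act as dropped neurons
    return F.cross_entropy(model.fc2(h), y)

def ga_dropout_mask(model, x, y, n_hidden,
                    pop_size=20, generations=10, keep_frac=0.5, mutate_p=0.05):
    """Genetic search for the keep-mask whose retained neurons give the highest
    loss, so that the 'hardest' subset of neurons is the one that gets trained."""
    pop = (torch.rand(pop_size, n_hidden) < keep_frac).float()
    best_mask, best_fit = pop[0], -float("inf")
    for _ in range(generations):
        with torch.no_grad():
            fit = torch.stack([evaluate_mask(model, m, x, y) for m in pop])
        if fit.max() > best_fit:
            best_fit, best_mask = fit.max().item(), pop[fit.argmax()].clone()
        # Selection: parents are the top half of the population by loss.
        parents = pop[fit.topk(pop_size // 2).indices]
        # Crossover: uniform mixing of two randomly chosen parents per child.
        n_child = pop_size - len(parents)
        i = torch.randint(len(parents), (n_child,))
        j = torch.randint(len(parents), (n_child,))
        children = torch.where(torch.rand(n_child, n_hidden) < 0.5,
                               parents[i], parents[j])
        # Mutation: flip a small fraction of bits.
        flip = (torch.rand_like(children) < mutate_p).float()
        children = (children + flip) % 2
        pop = torch.cat([parents, children])
    dropout_rate = 1.0 - best_mask.mean().item()   # rate planned from the selection
    return best_mask, dropout_rate
```

In a training loop, the returned mask could gate the hidden activations for the current batch and the planned rate could parameterize a standard Dropout layer; the exact integration in the paper may differ.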