Supplementary Meta-Learning: Towards a Dynamic Model for Deep Neural Networks

2017 IEEE International Conference on Computer Vision (ICCV)

Abstract
Data diversity in terms of types, styles, and radiometric, exposure, and texture conditions is widespread in the training and test data of vision applications. However, learning in traditional neural networks (NNs) only tries to find a model with fixed parameters that optimizes the average behavior over all inputs, without using data-specific properties. In this paper, we develop a meta-level NN (MLNN) model that learns meta-knowledge about data-specific properties of images during training and that dynamically adapts its weights at application time according to the properties of the input images. MLNN consists of two parts: a dynamic supplementary NN (SNN) that learns meta-information about each type of input, and a fixed base-level NN (BLNN) that incorporates the meta-information from the SNN into its weights at run time to generalize to each type of input. We verify our approach using over ten network architectures under various application scenarios and loss functions. In low-level vision applications, namely image super-resolution and denoising, MLNN improves PSNR by 0.1–0.3 dB, whereas for high-level image classification, MLNN improves accuracy by 0.4–0.6% on CIFAR-10 and 1.2–2.1% on ImageNet compared to convolutional NNs (CNNs). Improvements become more pronounced as the scale or diversity of the data increases.
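The core mechanism described above, a supplementary network emitting meta-information that modulates the base network's fixed weights at run time, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the single linear base layer, the per-output multiplicative scaling, and all shapes and names are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Base-level NN (BLNN): fixed weights, learned once over all training data.
# (A single 8-input, 4-output linear layer stands in for a full network.)
W_base = rng.standard_normal((4, 8))

# Supplementary NN (SNN): maps each individual input to meta-information.
# Here the meta-information is a vector of per-output scale factors.
W_snn = 0.01 * rng.standard_normal((4, 8))

def mlnn_forward(x):
    """Forward pass: BLNN weights adapted per input by SNN meta-information."""
    # Meta-information: data-specific modulation factors near 1.0.
    meta = 1.0 + np.tanh(W_snn @ x)        # shape (4,)
    # BLNN incorporates the meta-information into its weights at run time.
    W_dynamic = W_base * meta[:, None]     # per-row (per-output) scaling
    return W_dynamic @ x

x = rng.standard_normal(8)                 # one input sample
y = mlnn_forward(x)
print(y.shape)                             # (4,)
```

Because the SNN output depends on the input, two inputs with different data-specific properties pass through differently weighted versions of the same base model, which is the dynamic-model behavior the abstract describes.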
Keywords
supplementary meta-learning, dynamic model, deep neural networks, data diversity, data-specific properties, meta-level NN model, MLNN, meta-knowledge, dynamic supplementary NN, SNN, meta-information, fixed base-level NN, network architectures, low-level vision applications, image super-resolution, high-level image classification