Efficient parametrization of multi-domain deep neural networks

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018)

Cited by 381 | Viewed 174
Abstract
A practical limitation of deep neural networks is their high degree of specialization to a single task and visual domain. Recently, inspired by the successes of transfer learning, several authors have proposed to learn instead universal, fixed feature extractors that, used as the first stage of any deep network, work well for several tasks and domains simultaneously. Nevertheless, such universal features remain somewhat inferior to specialized networks. To overcome this limitation, in this paper we propose to consider instead universal parametric families of neural networks, which still contain specialized problem-specific models, but differ only by a small number of parameters. We study different designs for such parametrizations, including series and parallel residual adapters, joint adapter compression, and parameter allocations, and empirically identify the ones that yield the highest compression. We show that, in order to maximize performance, it is necessary to adapt both shallow and deep layers of a deep network, but that the required changes are very small. We also show that these universal parametrizations are very effective for transfer learning, where they outperform traditional fine-tuning techniques.
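To make the adapter idea concrete, below is a minimal PyTorch sketch of the parallel residual adapter design named in the abstract: a frozen, shared 3x3 convolution carries the domain-agnostic features, and each domain adds a small 1x1 convolution in parallel. This is an illustrative sketch, not the authors' reference implementation; the class and parameter names (ParallelAdapterConv, channels, num_domains) are assumptions.

```python
import torch
import torch.nn as nn


class ParallelAdapterConv(nn.Module):
    """Minimal sketch of a parallel residual adapter.

    A shared, frozen 3x3 convolution carries the domain-agnostic
    features; each domain adds a small 1x1 convolution in parallel,
    so adapting to a new domain trains far fewer parameters than
    the shared filter bank contains.
    """

    def __init__(self, channels: int, num_domains: int):
        super().__init__()
        # Domain-agnostic filter bank: trained once, then frozen.
        self.shared = nn.Conv2d(channels, channels, kernel_size=3,
                                padding=1, bias=False)
        for p in self.shared.parameters():
            p.requires_grad = False
        # One lightweight 1x1 adapter per domain (the only trained part).
        self.adapters = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=1, bias=False)
            for _ in range(num_domains)
        ])

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        # Parallel composition: shared response plus a per-domain correction.
        return self.shared(x) + self.adapters[domain](x)


# Hypothetical usage: one layer, ten domains, a CIFAR-sized feature map.
layer = ParallelAdapterConv(channels=64, num_domains=10)
y = layer(torch.randn(1, 64, 32, 32), domain=3)  # shape: (1, 64, 32, 32)
```

Since a 1x1 convolution has channels^2 weights versus 9 * channels^2 for the shared 3x3 filter, each extra domain costs roughly a ninth of the backbone's convolutional parameters, consistent with the abstract's claim that the required per-domain changes are very small.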
Keywords
parallel residual adapters, joint adapter compression, shallow layers, deep layers, universal parametrization, transfer learning, multidomain deep neural networks, universal feature extractors, neural network universal parametric families, series residual adapters, parameter allocations