SCA-CNN: Spatial and Channel-wise Attention in Convolutional Networks for Image Captioning

2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017

Citations: 1994 | Views: 263
Abstract
Visual attention has been successfully applied in structural prediction tasks such as visual captioning and question answering. Existing visual attention models are generally spatial, i.e., the attention is modeled as spatial probabilities that re-weight the last conv-layer feature map of a CNN encoding an input image. However, we argue that such spatial attention does not necessarily conform to the attention mechanism --- a dynamic feature extractor that combines contextual fixations over time, as CNN features are naturally spatial, channel-wise and multi-layer. In this paper, we introduce a novel convolutional neural network dubbed SCA-CNN that incorporates Spatial and Channel-wise Attentions in a CNN. In the task of image captioning, SCA-CNN dynamically modulates the sentence generation context in multi-layer feature maps, encoding where (i.e., attentive spatial locations at multiple layers) and what (i.e., attentive channels) the visual attention is. We evaluate the proposed SCA-CNN architecture on three benchmark image captioning datasets: Flickr8K, Flickr30K, and MSCOCO. It is consistently observed that SCA-CNN significantly outperforms state-of-the-art visual attention-based image captioning methods.
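To make the two attention types described in the abstract concrete, here is a minimal NumPy sketch (not the authors' implementation) of channel-wise attention followed by spatial attention over a CNN feature map. The dot-product scoring and the query vector `h` are simplifying assumptions; SCA-CNN itself uses learned projections conditioned on the decoder state.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along a given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_then_spatial_attention(features, h):
    """Toy channel-wise then spatial attention.

    features: CNN feature map of shape (C, H, W).
    h: hypothetical decoder state of shape (C,) used as a query.
    Scores are plain dot products here, purely for illustration.
    """
    C, H, W = features.shape
    # Channel-wise attention: one weight per channel ("what" to attend to).
    channel_scores = features.reshape(C, -1).mean(axis=1) * h
    beta = softmax(channel_scores, axis=0)              # shape (C,)
    modulated = features * beta[:, None, None]
    # Spatial attention: one weight per location ("where" to attend to).
    spatial_scores = modulated.sum(axis=0).reshape(-1)  # shape (H*W,)
    alpha = softmax(spatial_scores, axis=0).reshape(H, W)
    return modulated * alpha[None, :, :], beta, alpha

feats = np.random.rand(4, 3, 3)
out, beta, alpha = channel_then_spatial_attention(feats, np.ones(4))
```

Both attention maps are probability distributions (each sums to 1), and the output keeps the feature map's shape, so the same modulation can be applied at multiple conv layers as the paper proposes.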
Keywords
convolutional networks,structural prediction tasks,visual captioning,question answering,visual attention models,spatial probabilities,conv-layer feature map,CNN encoding,spatial attention,attention mechanism,dynamic feature extractor,CNN features,multilayer feature maps,attentive spatial locations,attentive channels,SCA-CNN architecture,image captioning methods,convolutional neural network,image captioning datasets,channel-wise attention,contextual fixations,Flickr30K,Flickr8K,MSCOCO