Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning

2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Cited by 1720
Abstract
Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as "the" and "of". Other words that may seem visual can often be predicted reliably just from the language model, e.g., "sign" after "behind a red stop" or "phone" following "talking on a cell". In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel, extracting the information needed for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K. Our approach sets a new state of the art by a significant margin.
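The abstract describes the core mechanism: at each decoding step the model forms a visual sentinel from the language decoder's memory and a gate that decides how much to rely on attended image regions versus that sentinel. Below is a minimal PyTorch sketch of one such adaptive-attention step, assuming region features and decoder states share a common dimension; the module and function names (AdaptiveAttention, visual_sentinel), layer shapes, and the toy usage are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of adaptive attention with a visual sentinel (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


def visual_sentinel(x_t, h_prev, m_t, w_x: nn.Linear, w_h: nn.Linear):
    """Gate the LSTM memory cell to form the sentinel: s_t = sigmoid(W_x x_t + W_h h_{t-1}) * tanh(m_t)."""
    g_t = torch.sigmoid(w_x(x_t) + w_h(h_prev))
    return g_t * torch.tanh(m_t)


class AdaptiveAttention(nn.Module):
    """One decoding step: score k image regions plus the sentinel, then mix them."""

    def __init__(self, dim: int, attn_dim: int):
        super().__init__()
        self.proj_v = nn.Linear(dim, attn_dim)   # projects region features
        self.proj_h = nn.Linear(dim, attn_dim)   # projects the decoder hidden state
        self.proj_s = nn.Linear(dim, attn_dim)   # projects the visual sentinel
        self.score = nn.Linear(attn_dim, 1)      # scalar attention score per candidate

    def forward(self, regions, h_t, s_t):
        # regions: (B, k, dim) spatial CNN features; h_t, s_t: (B, dim)
        h_proj = self.proj_h(h_t).unsqueeze(1)                                   # (B, 1, attn_dim)
        z_img = self.score(torch.tanh(self.proj_v(regions) + h_proj))            # (B, k, 1)
        z_sen = self.score(torch.tanh(self.proj_s(s_t).unsqueeze(1) + h_proj))   # (B, 1, 1)
        alpha = F.softmax(torch.cat([z_img, z_sen], dim=1), dim=1)               # (B, k+1, 1)
        beta = alpha[:, -1]                                                       # (B, 1) sentinel gate
        c_t = (alpha[:, :-1] * regions).sum(dim=1)                                # attended visual context
        c_hat = beta * s_t + (1.0 - beta) * c_t                                   # adaptive context for word prediction
        return c_hat, alpha.squeeze(-1), beta


# Toy usage: batch of 2, 49 regions (e.g. a 7x7 feature map), feature size 512.
w_x, w_h = nn.Linear(512, 512), nn.Linear(512, 512)
x_t, h_prev, m_t = torch.randn(2, 512), torch.randn(2, 512), torch.randn(2, 512)
s_t = visual_sentinel(x_t, h_prev, m_t, w_x, w_h)

attn = AdaptiveAttention(dim=512, attn_dim=256)
regions, h_t = torch.randn(2, 49, 512), torch.randn(2, 512)
c_hat, alpha, beta = attn(regions, h_t, s_t)
print(c_hat.shape, beta.shape)  # torch.Size([2, 512]) torch.Size([2, 1])
```

In this sketch, beta plays the "when to look" role: values near 1 mean the next word is generated mostly from the sentinel (language context), while values near 0 mean it is grounded in the attended image regions.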
Keywords
visual sentinel, neural encoder-decoder frameworks, visual information, language model, sequential word generation, adaptive attention model, COCO image captioning