
The synergy of double attention: Combine sentence-level and word-level attention for image captioning

Computer Vision and Image Understanding (2020)

Cited 21 · Views 8
Abstract
Existing attention models for image captioning typically extract only word-level attention information: the attention mechanism draws local information from the image to generate the current word, with no accurate global image information to guide it. In this paper, we first propose an image captioning approach based on self-attention, in which sentence-level attention information is extracted from the image via a self-attention mechanism to represent the global image information needed to generate the sentence. We then propose a double attention model that combines the sentence-level attention model with a word-level attention model to generate more accurate captions. We apply supervision and optimization at the intermediate stage of the model to mitigate information interference. In addition, we perform two-stage training with reinforcement learning to optimize the model's evaluation metric. Finally, we evaluate our model on three standard datasets: Flickr8k, Flickr30k, and MSCOCO. Experimental results show that our double attention model generates more accurate and richer captions and outperforms many state-of-the-art image captioning approaches on various evaluation metrics.
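The combination described in the abstract can be illustrated with a minimal sketch: a sentence-level pass applies self-attention over region features to form one global image vector, while a word-level pass attends over the same regions conditioned on the decoder's hidden state. This is a hypothetical NumPy illustration of the general idea, not the authors' implementation; all function names, dimensions, and the concatenation of the two contexts are assumptions.

```python
# Hypothetical sketch of double attention (not the paper's code):
# combine a sentence-level self-attention summary of image regions
# with word-level attention driven by the decoder state.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sentence_level_attention(regions):
    """Self-attention over region features -> one global image vector.
    regions: (k, d) array of k region features."""
    scores = regions @ regions.T / np.sqrt(regions.shape[1])  # (k, k)
    attended = softmax(scores) @ regions                      # (k, d)
    return attended.mean(axis=0)                              # (d,)

def word_level_attention(regions, hidden):
    """Local attention over regions, conditioned on the decoder state."""
    scores = regions @ hidden / np.sqrt(regions.shape[1])     # (k,)
    return softmax(scores) @ regions                          # (d,)

def double_attention_context(regions, hidden):
    """Concatenate global (sentence-level) and local (word-level) context;
    the fusion strategy here is an assumption for illustration."""
    return np.concatenate([sentence_level_attention(regions),
                           word_level_attention(regions, hidden)])

rng = np.random.default_rng(0)
regions = rng.standard_normal((36, 512))  # e.g. 36 detected region features
hidden = rng.standard_normal(512)         # decoder hidden state at step t
ctx = double_attention_context(regions, hidden)
print(ctx.shape)  # (1024,)
```

At each decoding step the word-level context changes with the hidden state, while the sentence-level context stays fixed, matching the abstract's split between per-word local attention and global sentence guidance.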