Watch, Listen and Tell: Multi-modal Weakly Supervised Dense Event Captioning

2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019)

Cited by 91 | Views: 2
Abstract
Multi-modal learning, particularly between imaging and linguistic modalities, has made remarkable strides on many high-level visual understanding problems, ranging from language grounding to dense event captioning. However, much of this research has been limited to approaches that either ignore the audio accompanying the video entirely, or model audio-visual correlations only in service of sound or sound-source localization. In this paper, we present evidence that audio signals can carry a surprising amount of information for high-level visual-lingual tasks. Specifically, we focus on the problem of weakly supervised dense event captioning in videos and show that audio on its own can nearly rival the performance of a state-of-the-art visual model and, combined with video, can improve on state-of-the-art performance. Extensive experiments on the ActivityNet Captions dataset show that our proposed multi-modal approach outperforms state-of-the-art unimodal methods, and validate our specific feature representation and architecture design choices.
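To make the multi-modal idea concrete, below is a minimal sketch of late fusion of audio and visual features feeding a caption decoder. This is not the authors' implementation: the feature dimensions, the concatenation-based fusion, the GRU decoder, and all module names are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): fuse pooled audio and visual features,
# then condition a GRU caption decoder on the fused vector.
import torch
import torch.nn as nn

class MultiModalCaptioner(nn.Module):
    def __init__(self, visual_dim=2048, audio_dim=128, hidden_dim=512, vocab_size=10000):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        # Fuse by concatenation + linear layer (one of several possible choices).
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)
        # Word embedding and a GRU decoder initialized from the fused features.
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, visual_feats, audio_feats, captions):
        # visual_feats: (B, T_v, visual_dim); audio_feats: (B, T_a, audio_dim)
        # captions: (B, L) token ids used as decoder inputs (teacher forcing).
        v = self.visual_proj(visual_feats).mean(dim=1)            # (B, hidden_dim)
        a = self.audio_proj(audio_feats).mean(dim=1)              # (B, hidden_dim)
        fused = torch.tanh(self.fuse(torch.cat([v, a], dim=-1)))  # (B, hidden_dim)
        h0 = fused.unsqueeze(0)                                   # (1, B, hidden_dim)
        emb = self.embed(captions)                                # (B, L, hidden_dim)
        out, _ = self.decoder(emb, h0)
        return self.out(out)                                      # (B, L, vocab_size)

# Example usage with random tensors standing in for per-frame visual features
# and audio features (placeholder dimensions).
model = MultiModalCaptioner()
logits = model(torch.randn(2, 32, 2048), torch.randn(2, 100, 128),
               torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```

Dropping the audio branch (or the visual branch) from such a model gives the unimodal baselines that the abstract compares against.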
Keywords
language grounding,audio-visual correlations,audio signals,high-level visual-lingual tasks,weakly-supervised dense event captioning,state-of-the-art visual model,ActivityNet Captions dataset,multimodal approach,multimodal weakly supervised dense event captioning,multimodal learning,linguistic modalities,amazing strides,high-level fundamental visual understanding problems