Temporal Moment Localization via Natural Language by Utilizing Video Question Answers as a Special Variant and Bypassing NLP for Corpora

IEEE Transactions on Circuits and Systems for Video Technology (2022)

Abstract
Temporal moment localization using natural language (TMLNL) is an emerging problem in computer vision that aims to localize a specific moment inside a long, untrimmed video. The goal of TMLNL is to retrieve the output moment of the video that best corresponds to the input query. Previous research focused on the visual portion of TMLNL, such as objects, backgrounds, and other visual attributes, while the textual portion was handled largely with natural language processing (NLP) techniques. A long query requires sufficient context to properly localize moments within a long, untrimmed video; as a consequence of not fully understanding how to handle queries, performance deteriorated, especially for longer queries. In this paper, we treat the TMLNL challenge as a special variant of video question answering (VQA), giving the visual elements equal weight through our proposed VQA joint visual-textual framework (JVTF). We also handle complex, long input queries without employing NLP by refining coarsely grained representations into finely grained representations of distinct granularity. Our proposed bidirectional context predictor network (BCPN) compensates for insufficient context in long input queries using an approach called the query handler (QH) and helps the JVTF find the most relevant moment. Previously, increasing the number of encoding layers in transformers, LSTMs, and other NLP techniques caused words to recur; our QH reduces such repetition of word positions. The output of the BCPN is combined with the guided attention of the JVTF to further improve the final result. In summary, we propose the novel BCPN alongside the JVTF to address the equal importance of videos and queries. Through extensive experiments on three benchmark datasets, we show that the proposed BCPN outperforms state-of-the-art methods by 2.65% at $IoU = 0.3$, 2.49% at $IoU = 0.5$, and 2.06% at $IoU = 0.7$.
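For reference, the IoU thresholds above refer to the standard temporal intersection-over-union score between a predicted moment and the ground-truth moment, and the reported numbers are gains in recall at those thresholds. The following minimal Python sketch illustrates how this metric is typically computed; it is not code from the paper, and the function names temporal_iou and recall_at_iou are illustrative only.

def temporal_iou(pred, gt):
    # pred and gt are (start, end) pairs in seconds
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_iou(predictions, ground_truths, threshold):
    # fraction of queries whose top-1 predicted moment reaches the IoU threshold
    hits = sum(temporal_iou(p, g) >= threshold
               for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)

# Example: the first prediction overlaps its ground truth well, the second does not.
preds = [(10.0, 25.0), (40.0, 50.0)]
gts = [(12.0, 26.0), (60.0, 70.0)]
print(recall_at_iou(preds, gts, 0.5))  # 0.5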
Keywords
Moment retrieval, video grounding, activity detection via language, moment localization using language