Characterizing Video Question Answering with Sparsified Inputs
CoRR (2023)
Abstract
In Video Question Answering, videos are often processed as a full-length
sequence of frames to minimize the loss of information. Recent works have
shown evidence that sparse video inputs are sufficient to maintain high
performance. However, they usually address only the case of single-frame
selection. In our work, we extend the setting to multiple inputs and other
modalities. We characterize the task under different levels of input sparsity
and provide a tool for doing so. Specifically, we use a Gumbel-based learnable
selection module to adaptively select the best inputs for the final task. We
experiment on public VideoQA benchmarks and analyze how sparsified inputs
affect performance. From our experiments, we observe only a 5.2%-5.8% loss of
performance with only 10% of the video length, which corresponds to 2-4 frames
selected from each video. We also observe complementary behavior between
visual and textual inputs, even under highly sparsified settings, suggesting
the potential for improving data efficiency in video-and-language tasks.
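To illustrate the kind of selection the abstract describes, here is a minimal
sketch of Gumbel-perturbed frame selection in NumPy. This is not the paper's
implementation: the scoring function, temperature `tau`, and the iterative
masking used to extend single-frame selection to k frames are all assumptions
made for illustration.

```python
import numpy as np

def gumbel_select(scores, tau=1.0, rng=None):
    """Perturb per-frame relevance scores with Gumbel noise and softmax;
    the argmax is the sampled frame (the idea behind Gumbel-softmax
    selection, sketched here without the differentiable training path)."""
    rng = np.random.default_rng(rng)
    # Standard Gumbel(0, 1) noise via inverse transform sampling.
    gumbel = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=scores.shape)))
    logits = (scores + gumbel) / tau
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs, int(np.argmax(probs))

def select_k_frames(scores, k, tau=1.0, seed=0):
    """Select k distinct frames by repeating the Gumbel draw with
    already-chosen frames masked out (one simple way to go from
    single-frame to multi-frame selection)."""
    scores = np.asarray(scores, dtype=float).copy()
    chosen = []
    for i in range(k):
        _, idx = gumbel_select(scores, tau, rng=seed + i)
        chosen.append(idx)
        scores[idx] = -np.inf  # mask out the selected frame
    return sorted(chosen)
```

For a 40-frame video, `select_k_frames(frame_scores, 4)` returns four distinct
frame indices, matching the 10%-of-length regime (2-4 frames) discussed above.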