LVBench: An Extreme Long Video Understanding Benchmark
arXiv (2024)
Abstract
Recent progress in multimodal large language models has markedly enhanced the
understanding of short videos (typically under one minute), and several
evaluation datasets have emerged accordingly. However, these advancements fall
short of meeting the demands of real-world applications such as embodied
intelligence for long-term decision-making, in-depth movie reviews and
discussions, and live sports commentary, all of which require comprehension of
long videos spanning several hours. To address this gap, we introduce LVBench,
a benchmark specifically designed for long video understanding. Our dataset
comprises publicly sourced videos and encompasses a diverse set of tasks aimed
at long video comprehension and information extraction. LVBench is designed to
challenge multimodal models to demonstrate long-term memory and extended
comprehension capabilities. Our extensive evaluations reveal that current
multimodal models still underperform on these demanding long video
understanding tasks. Through LVBench, we aim to spur the development of more
advanced models capable of tackling the complexities of long video
comprehension. Our data and code are publicly available at:
https://lvbench.github.io.