VideoMatt: A Simple Baseline for Accessible Real-Time Video Matting

CVPR Workshops (2023)

Abstract
Recently, real-time video matting has received growing attention from academia and industry as an emerging research area. However, most current state-of-the-art solutions are trained and evaluated on private or inaccessible matting datasets, which makes it hard for future researchers to conduct fair comparisons among different models. Moreover, most methods are built upon image matting models with various cross-frame tricks to boost matting quality; simple and effective temporal modeling methods for real-time video matting remain underexplored. As a result, we first composite a new video matting benchmark that is based purely on publicly accessible datasets for training and testing. We further empirically investigate various temporal modeling methods and compare their matting accuracy and inference speed. We name our method VideoMatt: a simple and strong real-time video matting baseline model built on a newly composited accessible benchmark. Extensive experiments show that our VideoMatt variants reach better trade-offs between inference speed and matting quality than other state-of-the-art methods for real-time trimap-free video matting. We release the VideoMatt benchmark at https://drive.google.com/file/d/1QT4KHeGW3YrtBs1_7zovdCwCAofQ_GIj/view?usp=sharing.
Keywords
accessible benchmark,baseline model,effective temporal modeling methods,image matting models,inaccessible matting datasets,matting quality,private matting datasets,publicly accessible datasets,real-time trimap-free video matting,real-time video matting models,simple baseline,state-of-the-art solutions,video matting benchmark,VideoMatt