Exploring Temporal Consistency in Image-Based Rendering for Immersive Video Transmission

2022 10th European Workshop on Visual Information Processing (EUVIP), 2022

Abstract
Image-based rendering methods synthesize novel views from input images captured at multiple viewpoints in order to display free-viewpoint immersive video. Despite the significant progress made by recent learning-based approaches, they still have drawbacks: in particular, they operate at the still-image level and do not maintain consistency across consecutive time instants, which leads to temporal noise. To address this, we propose an intra-only framework that identifies the regions of the input images responsible for temporally inconsistent synthesized views. Our method synthesizes better and more stable novel views, even in the most general use case of immersive video transmission. We conclude that the network appears to identify and correct spatial features at the still-image level that produce artifacts in the temporal dimension.
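The abstract does not give implementation details, but the kind of temporal inconsistency it targets can be illustrated with a minimal sketch: assuming a static scene and a fixed virtual viewpoint, pixels whose values fluctuate between consecutive synthesized frames are candidates for temporally unstable regions. The function name, threshold, and procedure below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def temporal_inconsistency(frame_t, frame_t1, threshold=0.05):
    """Flag pixels whose intensity changes sharply between two consecutive
    synthesized frames of a static scene (illustrative sketch only).

    frame_t, frame_t1 : float arrays in [0, 1], shape (H, W, 3).
    Returns a boolean mask of candidate temporally unstable regions.
    """
    # Per-pixel change, averaged over colour channels.
    diff = np.abs(frame_t1 - frame_t).mean(axis=-1)
    # For a static scene and fixed virtual viewpoint, large differences
    # indicate temporal noise introduced by the view-synthesis stage.
    return diff > threshold

# Usage example: two noisy renderings of the same static view.
rng = np.random.default_rng(0)
clean = rng.random((4, 4, 3))
frame_a = np.clip(clean + 0.01 * rng.standard_normal(clean.shape), 0, 1)
frame_b = np.clip(clean + 0.01 * rng.standard_normal(clean.shape), 0, 1)
mask = temporal_inconsistency(frame_a, frame_b)
print(mask.sum(), "unstable pixels")
```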
Keywords
image-based rendering,temporal consistency,immersive video transmission