Event-based Video Frame Interpolation with Edge Guided Motion Refinement
arXiv (2024)
Abstract
Video frame interpolation, the process of synthesizing intermediate frames
between sequential video frames, has made remarkable progress with the use of
event cameras. These sensors, with microsecond-level temporal resolution, fill
information gaps between frames by providing precise motion cues. However,
contemporary Event-Based Video Frame Interpolation (E-VFI) techniques often
neglect the fact that event data primarily supply high-confidence features at
scene edges during multi-modal feature fusion, thereby diminishing the role of
event signals in optical flow (OF) estimation and warping refinement. To
address this overlooked aspect, we introduce an end-to-end E-VFI learning
method (referred to as EGMR) to efficiently utilize edge features from event
signals for motion flow and warping enhancement. Our method incorporates an
Edge Guided Attentive (EGA) module, which rectifies estimated video motion
through attentive aggregation based on the local correlation of multi-modal
features in a coarse-to-fine strategy. Moreover, given that event data can
provide accurate visual references at scene edges between consecutive frames,
we introduce a learned visibility map derived from event data to adaptively
mitigate the occlusion problem in the warping refinement process. Extensive
experiments on both synthetic and real datasets demonstrate the effectiveness
of the proposed approach and its potential for higher-quality video frame
interpolation.
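The abstract does not give implementation details, but the visibility-map idea it describes (blending two backward-warped frames with an event-derived occlusion weight) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names `backward_warp` and `visibility_fuse` are hypothetical, and a real E-VFI pipeline would predict both the flow and the visibility map with learned networks rather than take them as inputs.

```python
import torch
import torch.nn.functional as F


def backward_warp(frame, flow):
    """Backward-warp `frame` (B,C,H,W) with optical flow `flow` (B,2,H,W).

    Each output pixel samples the input at its location plus the flow,
    which is the standard warping step a VFI refinement stage would correct.
    """
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).to(frame)        # (2,H,W) pixel grid
    coords = base.unsqueeze(0) + flow                    # displaced coordinates
    # Normalise coordinates to [-1, 1] as required by grid_sample.
    nx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    ny = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((nx, ny), dim=-1)                 # (B,H,W,2)
    return F.grid_sample(frame, grid, align_corners=True)


def visibility_fuse(warped0, warped1, vis):
    """Blend two warped frames with a visibility map `vis` in [0,1].

    Where vis -> 1 the pixel is trusted from frame 0; where vis -> 0 it is
    taken from frame 1, so occluded regions can be filled from the other view.
    """
    return vis * warped0 + (1.0 - vis) * warped1
```

With zero flow, `backward_warp` returns (up to interpolation error) the input frame, and a constant 0.5 visibility map averages the two inputs; a learned map would instead down-weight occluded pixels per location.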