Fusing surveillance videos and three-dimensional scene: A mixed reality system

Computer Animation and Virtual Worlds (2023)

Abstract
Augmented Virtual Environments (AVE), or virtual-reality fusion systems, fuse dynamic videos with a static three-dimensional (3D) model of a virtual environment, providing an effective way to visualize and understand multichannel surveillance systems. However, texture distortion caused by viewpoint changes is a critical issue in such systems. To minimize texture fusion distortion, this paper presents a novel virtual environment system that dynamically fuses multiple surveillance videos with a virtual 3D scene in two phases, an offline phase and an online phase. In the offline phase, a static virtual environment is obtained by 3D photogrammetric reconstruction from input images of the scene. In the online phase, the virtual environment is augmented by fusing multiple videos through two optional strategies: one dynamically maps frames of the different videos onto the 3D model of the virtual environment, and the other extracts moving objects and represents them as billboards. The system can visualize the 3D environment from any viewpoint, augmented by real-time videos. Experiments and user studies in different scenarios demonstrate the superiority of our system.
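The first online strategy described above, dynamically mapping video frames onto the reconstructed 3D model, is essentially projective texture mapping from a calibrated surveillance camera. Below is a minimal sketch of that idea, not code from the paper: the function `project_vertices`, the pinhole intrinsics, and the example quad are all illustrative assumptions. Each model vertex is projected through the camera to obtain texture coordinates into the current video frame, and only vertices that land in front of the camera and inside the frame would receive video texture.

```python
import numpy as np

def project_vertices(vertices, K, R, t, width, height):
    """Project 3D scene vertices into a surveillance camera's image plane.

    vertices : (N, 3) world-space vertex positions of the reconstructed model
    K        : (3, 3) camera intrinsic matrix
    R, t     : world-to-camera rotation (3, 3) and translation (3,)
    width, height : video frame size in pixels

    Returns (N, 2) texture coordinates in [0, 1] for sampling the current
    video frame, and a boolean mask of vertices that project in front of
    the camera and inside the frame.
    """
    cam = vertices @ R.T + t                       # world -> camera coordinates
    in_front = cam[:, 2] > 1e-6                    # keep points in front of the camera
    pix = cam @ K.T                                # camera -> homogeneous pixel coordinates
    pix = pix[:, :2] / np.where(in_front, cam[:, 2], 1.0)[:, None]  # perspective division
    uv = pix / np.array([width, height], dtype=float)               # normalize to texture space
    visible = in_front & np.all((uv >= 0.0) & (uv <= 1.0), axis=1)
    return uv, visible

# Hypothetical 640x480 camera placed 2 m in front of a unit quad on the z = 0 plane.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
quad = np.array([[-0.5, -0.5, 0.0], [0.5, -0.5, 0.0],
                 [ 0.5,  0.5, 0.0], [-0.5, 0.5, 0.0]])
uv, visible = project_vertices(quad, K, R, t, 640, 480)
print(uv)       # per-vertex texture coordinates into the video frame
print(visible)  # all four vertices fall inside the frame in this toy setup
```

In a real AVE renderer this projection would typically be evaluated in a vertex or fragment shader, with depth testing against the reconstructed model so that surfaces occluded from the surveillance camera are not textured with the video.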
Keywords
augmented virtual environments, video fusion, video surveillance, virtual environments, virtual-reality fusion