Free-viewpoint AR human-motion reenactment based on a single RGB-D video stream

Multimedia and Expo (2014)

Abstract
When observing a person (an actor) performing or demonstrating an activity in order to learn it, it is best for the viewer to be present at the same time and place as the actor. Otherwise, a video must be recorded. Conventional video, however, provides only two-dimensional (2D) motion and loses the original third dimension. When the motion is ambiguous, the viewer may find it hard to comprehend, and therefore to learn, the action from two dimensions alone. This paper proposes an augmented reality system that reenacts such actions whenever the viewer wants, in order to aid comprehension of 3D motion. In the proposed system, a user first captures the actor's motion and appearance using a single RGB-D camera. Upon the viewer's request, the system displays the motion from an arbitrary viewpoint by rendering a rough 3D model of the subject, made up of cylinders, and selecting the most appropriate textures according to the viewpoint and the subject's pose. We evaluate the usefulness of the system and the quality of the displayed images through a user study.
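The abstract describes view- and pose-dependent texture selection for the cylinder-based body model, but gives no algorithmic listing. The sketch below is only an illustration of one plausible scoring scheme under assumed data structures: each captured frame is a dict with a unit `view_dir` vector and a `pose` joint-angle vector, and the weights `w_view` and `w_pose` are hypothetical, not taken from the paper.

```python
import numpy as np

def select_texture(part_pose, view_dir, captured_frames,
                   w_view=1.0, w_pose=0.5):
    """Pick the captured frame whose capture viewpoint and body pose
    best match the requested rendering viewpoint (illustrative only)."""
    best_frame, best_score = None, float("inf")
    for frame in captured_frames:
        # Angular difference between the requested viewing direction and
        # the direction from which this frame observed the body part.
        view_cost = np.arccos(
            np.clip(np.dot(view_dir, frame["view_dir"]), -1.0, 1.0))
        # Difference between the current pose and the frame's recorded pose.
        pose_cost = np.linalg.norm(part_pose - frame["pose"])
        score = w_view * view_cost + w_pose * pose_cost
        if score < best_score:
            best_frame, best_score = frame, score
    return best_frame
```

In such a scheme, the texture for each cylinder would be re-selected per rendered frame, trading off how closely the capture viewpoint matches the virtual viewpoint against how similar the captured pose is to the current pose.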
Keywords
augmented reality, image colour analysis, image motion analysis, image sensors, image texture, solid modelling, video signal processing, 3D motion comprehension, action learning, augmented reality system, free-viewpoint AR human-motion reenactment, rough 3D model, single RGB-D camera, single RGB-D video stream, texture selection, free-viewpoint image generation, human motion capture