Human motion tracking with monocular video by introducing a graph structure into Gaussian process dynamical models

ADVANCES IN IMAGE AND VIDEO TECHNOLOGY, PT I (2011)

Abstract
This paper presents a novel approach to tracking articulated human motion in monocular video. In a conventional tracking system based on particle filters, it is very challenging to track a complex human pose with many degrees of freedom. A typical solution is to track the pose in a low-dimensional latent space obtained by manifold learning techniques, e.g., the Gaussian process dynamical model (GPDM). In this paper, we extend the GPDM into a graph structure (called the GPDM graph) to better express the diverse dynamics of human motion, where multiple latent spaces are constructed and appropriately connected to each other by an unsupervised learning method. The proposed model has both intra-transitions (within each latent space) and inter-transitions (among latent spaces). Moreover, the inter-transition probability is dynamic, depending on the current latent state. Using the proposed GPDM graph model, we track human motion in monocular video, and in our experiments the average tracking errors improve on those of state-of-the-art methods.
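The abstract gives no implementation details, but the prediction step it describes can be illustrated conceptually. Below is a minimal Python/NumPy sketch of propagating particles over a graph of latent spaces with intra- and inter-transitions. The learned GPDM dynamics, the learned inter-transition probabilities, the cross-space state mapping, and the observation likelihood used in the paper are all replaced here by hypothetical placeholders (a random linear map, a norm-based switching rule, re-initialisation near the origin), so this shows only the structure of the model, not the authors' actual method.

import numpy as np

rng = np.random.default_rng(0)

class LatentSpaceNode:
    """One latent space (motion primitive). The GP mean dynamics is
    stood in for by a hypothetical linear map A plus process noise."""
    def __init__(self, dim, seed):
        r = np.random.default_rng(seed)
        self.A = np.eye(dim) + 0.05 * r.standard_normal((dim, dim))
        self.noise = 0.05

    def intra_transition(self, x):
        # Propagate a particle inside this latent space.
        return self.A @ x + self.noise * rng.standard_normal(x.shape)

def inter_transition_prob(x):
    # Hypothetical state-dependent switching probability: particles far
    # from the origin are more likely to leave the current latent space.
    return float(np.clip(0.1 * np.linalg.norm(x), 0.0, 0.5))

# A toy GPDM graph: three latent spaces connected in a ring.
nodes = [LatentSpaceNode(dim=3, seed=s) for s in range(3)]
edges = {0: [1], 1: [2], 2: [0]}

def propagate_particle(node_id, x):
    """One prediction step for a particle (graph node, latent state)."""
    if rng.random() < inter_transition_prob(x):
        # Inter-transition: move to a connected latent space.
        node_id = rng.choice(edges[node_id])
        # Hypothetical cross-space mapping: re-initialise near the origin.
        x = 0.1 * rng.standard_normal(x.shape)
    else:
        # Intra-transition: stay in the current latent space.
        x = nodes[node_id].intra_transition(x)
    return node_id, x

# Propagate a small particle set for a few frames.
particles = [(0, rng.standard_normal(3)) for _ in range(100)]
for frame in range(5):
    particles = [propagate_particle(nid, x) for nid, x in particles]
    counts = np.bincount([nid for nid, _ in particles], minlength=3)
    print(f"frame {frame}: particles per latent space = {counts}")

The structural point the sketch preserves is that each particle carries both a graph node and a latent state, and the probability of leaving a latent space depends on that particle's current latent position, which is what the abstract means by a dynamic inter-transition probability.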
Key words
human motion tracking, human motion, proposed GPDM graph model, GPDM model, Gaussian process dynamical model, multiple latent spaces, latent space, graph structure, low-dimensional latent space, monocular video, current latent state, manifold learning, motion tracking