Denoising Diffusion Probabilistic Models for Action-Conditioned 3D Motion Generation.
IEEE International Conference on Acoustics, Speech, and Signal Processing (2024)
Abstract
Diffusion-based generative models have proven highly effective across a range of synthesis domains. In this work, we propose a conditional paradigm built on the denoising diffusion probabilistic model (DDPM) to address the challenge of realistic and diverse action-conditioned 3D skeleton-based motion generation. The proposed method generates samples by inferring the reversed Markov chain from the distribution mapping learned during the forward diffusion process. To the best of our knowledge, our work is the first to employ DDPM to synthesize a variable number of motion sequences conditioned on a categorical action. The proposed method is evaluated on the NTU RGB+D dataset and the NTU RGB+D two-person dataset, showing significant improvements over state-of-the-art motion generation methods.
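The abstract refers to the standard DDPM forward (noising) process that the reverse chain is trained to invert. As a minimal sketch of that forward process applied to a skeleton-motion tensor (frame and joint counts, the beta schedule, and all function names here are illustrative assumptions, not details from the paper):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng=None):
    """Sample x_t ~ q(x_t | x_0), the DDPM forward (noising) process.

    Uses the closed form
        x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta_s) for s <= t.
    """
    rng = rng or np.random.default_rng(0)
    alpha_bar = np.cumprod(1.0 - betas)      # \bar{alpha}_t for every step
    eps = rng.standard_normal(x0.shape)      # Gaussian noise, same shape as x0
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

# Toy "motion sequence": 8 frames x 3 joint coordinates, linear beta schedule.
betas = np.linspace(1e-4, 0.02, 1000)
x0 = np.zeros((8, 3))
xt, eps = forward_diffuse(x0, t=999, betas=betas)
```

At the final step, alpha_bar is close to zero, so x_t is nearly pure Gaussian noise; the reverse chain learned by the model starts from such noise and denoises step by step back toward a motion sample.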