Tri-Modal Motion Retrieval by Learning a Joint Embedding Space
CVPR 2024
Abstract
Information retrieval is an ever-evolving and crucial research domain. The
substantial demand for high-quality human motion data, especially for online
acquisition, has led to a surge in human motion research. Prior work has
mainly concentrated on dual-modality learning, such as text-motion tasks,
while three-modality learning has rarely been explored. Intuitively,
introducing an extra modality can enrich a model's application scenarios, and,
more importantly, an adequate choice of the extra modality can also act as an
intermediary that enhances the alignment between the other two disparate
modalities. In this work, we introduce LAVIMO (LAnguage-VIdeo-MOtion
alignment), a novel framework for three-modality learning that integrates
human-centric videos as an additional modality, thereby effectively bridging
the gap between text and motion. Moreover, our approach leverages a specially
designed attention mechanism to foster enhanced alignment and synergistic
effects among the text, video, and motion modalities. Empirically, our results
on the HumanML3D and KIT-ML datasets show that LAVIMO achieves
state-of-the-art performance on various motion-related cross-modal retrieval
tasks, including text-to-motion, motion-to-text, video-to-motion, and
motion-to-video.
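The abstract does not spell out the training objective. As a rough, hedged illustration of how a joint text-video-motion embedding space can be aligned contrastively, the sketch below sums pairwise InfoNCE losses over the three modality pairs, so that video participates in both pairs involving it and can serve as an intermediary between text and motion. The function names, the temperature value, and the pairwise formulation are assumptions for illustration only, not LAVIMO's published objective, which additionally uses a specially designed attention mechanism not modeled here.

```python
import torch
import torch.nn.functional as F


def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss between two batches of embeddings.

    a, b: (batch, dim) tensors whose matching rows are positive pairs;
    all other rows in the batch serve as negatives.
    """
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Cross-entropy in both retrieval directions (a->b and b->a), averaged.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


def tri_modal_loss(text_emb: torch.Tensor,
                   video_emb: torch.Tensor,
                   motion_emb: torch.Tensor) -> torch.Tensor:
    """Sum of pairwise contrastive losses over the three modality pairs.

    Hypothetical formulation: the video modality appears in two of the three
    terms, letting it bridge the text and motion embeddings.
    """
    return (info_nce(text_emb, motion_emb)
            + info_nce(video_emb, motion_emb)
            + info_nce(text_emb, video_emb))


# Toy usage with random features standing in for encoder outputs.
batch, dim = 8, 256
loss = tri_modal_loss(torch.randn(batch, dim),
                      torch.randn(batch, dim),
                      torch.randn(batch, dim))
print(loss.item())
```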