Meta-Point Learning and Refining for Category-Agnostic Pose Estimation
CVPR 2024
Abstract
Category-agnostic pose estimation (CAPE) aims to predict keypoints for
arbitrary classes given a few support images annotated with keypoints. Existing
methods rely only on the features extracted at support keypoints to predict or
refine the keypoints on the query image, but a few support feature vectors are
local and inadequate for CAPE. Considering that humans can quickly perceive
potential keypoints of arbitrary objects, we propose a novel framework for CAPE
based on such potential keypoints (named meta-points). Specifically, we
maintain learnable embeddings that capture the inherent information of various
keypoints; these embeddings interact with image feature maps to produce
meta-points without any support. The produced meta-points can serve as
meaningful potential keypoints for CAPE. Because of the inevitable gap between
inherency and annotation, we then use the identities and details offered by
support keypoints to assign and refine meta-points into the desired keypoints
in the query image. In addition, we propose a progressive deformable point
decoder and a slacked regression loss for better prediction and supervision.
Our novel framework not only reveals the inherency of keypoints but also
outperforms existing CAPE methods. Comprehensive experiments and in-depth
studies on the large-scale MP-100 dataset demonstrate the effectiveness of our
framework.
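The core mechanism described above — learnable embeddings interacting with image feature maps to produce meta-points without any support image — can be sketched as a cross-attention step followed by a coordinate head. This is a minimal illustrative sketch in NumPy, not the paper's implementation; all names, shapes, and the sigmoid coordinate head are assumptions.

```python
import numpy as np

# Hypothetical sketch: K learnable meta-point embeddings cross-attend to the
# flattened image feature map, then a linear head regresses 2D coordinates.
# Shapes and parameter names are illustrative assumptions, not the paper's code.

rng = np.random.default_rng(0)
K, N, d = 10, 64, 32                    # meta-points, feature tokens, channels

meta_embed = rng.normal(size=(K, d))    # learnable meta-point embeddings
feat_map   = rng.normal(size=(N, d))    # flattened query image feature map
W_coord    = rng.normal(size=(d, 2))    # linear head mapping features to (x, y)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Cross-attention: each embedding aggregates the image features it matches,
# so meta-points emerge from the image alone, without support keypoints.
attn = softmax(meta_embed @ feat_map.T / np.sqrt(d))   # (K, N) attention weights
attended = attn @ feat_map                             # (K, d) per-point features

# Regress normalized keypoint coordinates in (0, 1) for each meta-point.
coords = 1.0 / (1.0 + np.exp(-(attended @ W_coord)))   # (K, 2)
print(coords.shape)
```

In the full framework these meta-points would then be assigned to and refined toward the support-annotated keypoints; that matching/refinement stage is not shown here.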