PointSeg: A Training-Free Paradigm for 3D Scene Segmentation via Foundation Models
arXiv (2024)
Abstract
The recent success of vision foundation models has shown promising performance
on 2D perception tasks. However, it is difficult to train a 3D foundation
network directly due to limited datasets, and it remains underexplored
whether existing foundation models can be lifted to 3D space seamlessly. In
this paper, we present PointSeg, a novel training-free paradigm that leverages
off-the-shelf vision foundation models to address 3D scene perception tasks.
PointSeg can segment anything in 3D scenes by acquiring accurate 3D prompts to
align their corresponding pixels across frames. Concretely, we design a
two-branch prompt learning structure to construct 3D point-box prompt
pairs, combined with a bidirectional matching strategy for accurate point
and proposal prompt generation. We then perform iterative post-refinement
adaptively in cooperation with different vision foundation models. Moreover,
we design an affinity-aware merging algorithm to improve the final ensemble
masks. PointSeg demonstrates impressive segmentation performance across various
datasets, all without training. Specifically, our approach significantly
surpasses the state-of-the-art specialist model by 13.4%, 11.3%, and
12% mAP on the ScanNet, ScanNet++, and KITTI-360 datasets, respectively. On top
of that, PointSeg can be integrated with various segmentation models and even
surpasses supervised methods.
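The bidirectional matching strategy mentioned above can be illustrated with a minimal sketch: keep a (point prompt, box proposal) pair only when each side is the other's highest-scoring match. The function name and the raw affinity scores below are hypothetical illustrations, not taken from the paper.

```python
# Hedged sketch of bidirectional (mutual-best) matching between point prompts
# and box proposals. `scores[i][j]` is an assumed affinity between point
# prompt i and box proposal j; the actual scoring in PointSeg may differ.

def bidirectional_match(scores):
    """Return (point, box) index pairs whose preference is mutual."""
    n_points = len(scores)
    n_boxes = len(scores[0]) if scores else 0
    # Best box for each point prompt.
    best_box = [max(range(n_boxes), key=lambda j: scores[i][j])
                for i in range(n_points)]
    # Best point prompt for each box proposal.
    best_point = [max(range(n_points), key=lambda i: scores[i][j])
                  for j in range(n_boxes)]
    # A pair survives only if each side is the other's top match.
    return [(i, best_box[i]) for i in range(n_points)
            if best_point[best_box[i]] == i]

# Example: point 2 prefers box 0, but box 0 prefers point 0, so the
# (2, 0) pair is filtered out.
pairs = bidirectional_match([[0.9, 0.1],
                             [0.2, 0.8],
                             [0.7, 0.3]])
print(pairs)  # [(0, 0), (1, 1)]
```

Mutual-best filtering of this kind is a common way to suppress ambiguous one-sided matches before passing prompts to a downstream segmentation model.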