Region-Transformer: Self-Attention Region Based Class-Agnostic Point Cloud Segmentation
International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (2024)
Abstract
Point cloud segmentation, which helps us understand the environment of
specific structures and objects, can be performed in class-specific and
class-agnostic ways. We propose a novel region-based transformer model called
Region-Transformer for performing class-agnostic point cloud segmentation. The
model utilizes a region-growth approach and self-attention mechanism to
iteratively expand or contract a region by adding or removing points. It is
trained on simulated point clouds with instance labels only, avoiding semantic
labels. Attention-based networks have succeeded in many previous point cloud
segmentation methods; however, the performance gain of combining a
region-growth approach with attention-based networks had not been explored. To
our knowledge, we are the first to use a self-attention mechanism in a
region-growth approach. With the introduction of self-attention to
region-growth that can utilize local contextual information of neighborhood
points, our experiments demonstrate that the Region-Transformer model
outperforms previous class-agnostic and class-specific methods on indoor
datasets in terms of clustering metrics. The model generalizes well to
large-scale scenes. Key advantages include capturing long-range dependencies
through self-attention, avoiding the need for semantic labels during training,
and applicability to a variable number of objects. The Region-Transformer model
represents a promising approach for flexible point cloud segmentation with
applications in robotics, digital twinning, and autonomous vehicles.
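To make the iterative region-growth idea concrete, the following is a minimal sketch: starting from a seed point, every point is repeatedly scored against the current region, and points cross into or out of the region based on a threshold. Note the toy centroid-distance score here merely stands in for the paper's learned self-attention scoring, and all names (`grow_region`, `distance_score`, `threshold`) are illustrative, not from the paper.

```python
import numpy as np

def grow_region(points, seed_idx, score_fn, threshold=0.5, max_iters=10):
    """Iteratively expand or contract a region from a seed point.

    At each iteration, every point is scored against the current region
    (a stand-in for the paper's self-attention scoring); points above
    `threshold` join the region and points below it are removed.
    """
    region = {seed_idx}
    for _ in range(max_iters):
        scores = score_fn(points, region)
        new_region = {i for i, s in enumerate(scores) if s > threshold}
        new_region.add(seed_idx)       # the seed point is always kept
        if new_region == region:       # converged: no adds or removals
            break
        region = new_region
    return region

def distance_score(points, region):
    """Toy score: closeness to the region centroid (NOT attention)."""
    centroid = points[list(region)].mean(axis=0)
    d = np.linalg.norm(points - centroid, axis=1)
    return 1.0 / (1.0 + d)

# Two well-separated clusters; growing from a point in the first cluster
# should recover exactly that cluster and exclude the other.
rng = np.random.default_rng(0)
cluster_a = rng.normal(loc=0.0, scale=0.1, size=(20, 3))
cluster_b = rng.normal(loc=5.0, scale=0.1, size=(20, 3))
pts = np.vstack([cluster_a, cluster_b])
region = grow_region(pts, seed_idx=0, score_fn=distance_score)
```

In the actual Region-Transformer model, the add/remove decision for each point is driven by self-attention over the neighborhood, which lets the region boundary use local context rather than a fixed geometric rule as in this sketch.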