Bringing Masked Autoencoders Explicit Contrastive Properties for Point Cloud Self-Supervised Learning
arXiv (2024)
Abstract
Contrastive learning (CL) for Vision Transformers (ViTs) in image domains has
achieved performance comparable to CL for traditional convolutional backbones.
However, in 3D point cloud pretraining with ViTs, masked autoencoder (MAE)
modeling remains dominant. This raises the question: can we get the best of
both worlds? To answer this, we first show empirically that integrating
MAE-based point cloud pre-training with the standard contrastive learning
paradigm, even with meticulous design, can degrade performance. To address
this limitation, we reintroduce CL into the MAE-based
point cloud pre-training paradigm by leveraging the inherent contrastive
properties of MAE. Specifically, rather than relying on extensive data
augmentation as commonly used in the image domain, we randomly mask the input
tokens twice to generate contrastive input pairs. Subsequently, a
weight-sharing encoder and two identically structured decoders are utilized to
perform masked token reconstruction. Additionally, for any input token masked
by both masks simultaneously, we constrain the two reconstructed features to
be as similar as possible. This naturally establishes an explicit contrastive
constraint within the generative MAE-based pre-training paradigm, resulting in
our proposed method, Point-CMAE. Consequently, Point-CMAE effectively enhances
the representation quality and transfer performance compared to its MAE
counterpart. Experimental evaluations across various downstream applications,
including classification, part segmentation, and few-shot learning, demonstrate
the efficacy of our framework in surpassing state-of-the-art techniques under
standard ViTs and single-modal settings. The source code and trained models are
available at: https://github.com/Amazingren/Point-CMAE.
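
To make the double-masking scheme concrete, below is a minimal PyTorch sketch of the idea described in the abstract. It is not the authors' implementation: the names (`PointCMAESketch`, `random_mask`), the layer sizes, the MSE stand-in for point-patch reconstruction (the paper reconstructs point patches, typically with a Chamfer-style loss), and the cosine-similarity form of the constraint are all illustrative assumptions; the point tokenizer and positional embeddings are omitted. See the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def random_mask(num_tokens: int, mask_ratio: float, device) -> torch.Tensor:
    """Boolean mask over token positions; True marks a masked token.
    For simplicity the same mask is shared across the batch."""
    num_masked = int(num_tokens * mask_ratio)
    perm = torch.randperm(num_tokens, device=device)
    mask = torch.zeros(num_tokens, dtype=torch.bool, device=device)
    mask[perm[:num_masked]] = True
    return mask


class PointCMAESketch(nn.Module):
    """Double-masked MAE with an explicit cross-decoder similarity term
    (hypothetical sketch, not the released Point-CMAE code)."""

    def __init__(self, dim: int = 384, mask_ratio: float = 0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        make_layer = lambda: nn.TransformerEncoderLayer(dim, nhead=6, batch_first=True)
        # One weight-sharing encoder processes both masked views.
        self.encoder = nn.TransformerEncoder(make_layer(), num_layers=4)
        # Two identically structured, separately parameterized decoders.
        self.decoder_a = nn.TransformerEncoder(make_layer(), num_layers=2)
        self.decoder_b = nn.TransformerEncoder(make_layer(), num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.recon_head = nn.Linear(dim, dim)

    def _encode_decode(self, tokens, mask, decoder):
        b, n, d = tokens.shape
        latent = self.encoder(tokens[:, ~mask, :])      # encode visible tokens only
        full = self.mask_token.expand(b, n, d).clone()  # learned mask token everywhere
        full[:, ~mask, :] = latent                      # re-insert visible latents
        return decoder(full)

    def forward(self, tokens):
        # tokens: (B, N, dim) point-patch embeddings; positional embeddings
        # and the point tokenizer are omitted for brevity.
        n, device = tokens.shape[1], tokens.device
        # Mask the same input twice to form the contrastive pair of views.
        mask_a = random_mask(n, self.mask_ratio, device)
        mask_b = random_mask(n, self.mask_ratio, device)
        dec_a = self._encode_decode(tokens, mask_a, self.decoder_a)
        dec_b = self._encode_decode(tokens, mask_b, self.decoder_b)
        # Per-view MAE reconstruction loss (MSE stands in for the paper's
        # point-patch reconstruction objective).
        rec = F.mse_loss(self.recon_head(dec_a)[:, mask_a], tokens[:, mask_a]) \
            + F.mse_loss(self.recon_head(dec_b)[:, mask_b], tokens[:, mask_b])
        # Explicit contrastive constraint: a token masked by BOTH masks should
        # be reconstructed to similar features by the two decoders.
        both = mask_a & mask_b
        if both.any():
            sim = 1.0 - F.cosine_similarity(dec_a[:, both], dec_b[:, both], dim=-1).mean()
        else:
            sim = tokens.new_zeros(())
        return rec + sim


# Usage: one training step on random token embeddings.
model = PointCMAESketch()
loss = model(torch.randn(2, 64, 384))
loss.backward()
```

At a 0.6 mask ratio the two masks almost always overlap, so the similarity term is active for nearly every batch; the guard merely covers the degenerate no-overlap case.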