Rethinking Patch Dependence for Masked Autoencoders
CoRR (2024)
Abstract
In this work, we re-examine inter-patch dependencies in the decoding
mechanism of masked autoencoders (MAE). We decompose this decoding mechanism
for masked patch reconstruction in MAE into self-attention and cross-attention.
Our investigations suggest that self-attention between mask patches is not
essential for learning good representations. To this end, we propose a novel
pretraining framework: Cross-Attention Masked Autoencoders (CrossMAE).
CrossMAE's decoder leverages only cross-attention between masked and visible
tokens, with no degradation in downstream performance. This design also enables
decoding only a small subset of mask tokens, boosting efficiency. Furthermore,
each decoder block can now leverage different encoder features, resulting in
improved representation learning. CrossMAE matches MAE in performance with 2.5
to 3.7× less decoding compute. It also surpasses MAE on ImageNet
classification and COCO instance segmentation under the same compute. Code and
models: https://crossmae.github.io
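The design described above replaces the decoder's self-attention among mask tokens with cross-attention from mask-token queries to visible-token features produced by the encoder. Below is a minimal sketch of such a cross-attention-only decoder block in PyTorch; it is an illustration of the idea under stated assumptions, not the official CrossMAE implementation, and the class and variable names (`CrossAttentionDecoderBlock`, `mask_tokens`, `visible_feats`) are hypothetical.

```python
# Sketch of a decoder block that uses only cross-attention:
# mask-token queries attend to visible-token encoder features,
# and masked patches never attend to one another.
# This is an assumption-laden illustration, not the paper's released code.
import torch
import torch.nn as nn


class CrossAttentionDecoderBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8, mlp_ratio: float = 4.0):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        # batch_first=True: tensors are shaped (batch, tokens, dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_mlp = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )

    def forward(self, mask_tokens: torch.Tensor, visible_feats: torch.Tensor) -> torch.Tensor:
        # Queries come only from mask tokens; keys/values only from visible
        # tokens, so there is no self-attention among masked patches.
        q = self.norm_q(mask_tokens)
        kv = self.norm_kv(visible_feats)
        attn_out, _ = self.cross_attn(q, kv, kv, need_weights=False)
        x = mask_tokens + attn_out
        x = x + self.mlp(self.norm_mlp(x))
        return x


if __name__ == "__main__":
    # Hypothetical sizes: decode only a subset of the mask tokens for efficiency.
    B, N_vis, N_mask, D = 2, 49, 25, 512
    block = CrossAttentionDecoderBlock(D)
    visible = torch.randn(B, N_vis, D)  # encoder features for visible patches
    masked = torch.randn(B, N_mask, D)  # mask tokens (e.g. learned embedding + position)
    out = block(masked, visible)
    print(out.shape)  # torch.Size([2, 25, 512])
```

Because queries and key/value inputs are decoupled, each decoder block could in principle take its `visible_feats` from a different encoder layer, which is the per-block encoder-feature routing the abstract mentions.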