Harnessing Diffusion Models for Visual Perception with Meta Prompts
CoRR (2023)
Abstract
The issue of generative pretraining for vision models has persisted as a
long-standing conundrum. At present, the text-to-image (T2I) diffusion model
demonstrates remarkable proficiency in generating high-definition images
matching textual inputs, a feat made possible through its pre-training on
large-scale image-text pairs. This leads to a natural inquiry: can diffusion
models be utilized to tackle visual perception tasks? In this paper, we propose
a simple yet effective scheme to harness a diffusion model for visual
perception tasks. Our key insight is to introduce learnable embeddings (meta
prompts) to the pre-trained diffusion models to extract proper features for
perception. The effect of the meta prompts is two-fold. First, as a direct
replacement for the text embeddings in the T2I model, they activate
task-relevant features during feature extraction. Second, they are used to
re-arrange the extracted features to ensure that the model focuses on the
features most pertinent to the task at hand. Additionally, we design a recurrent
refinement training strategy that fully leverages the property of diffusion
models, thereby yielding stronger visual features. Extensive experiments across
various benchmarks validate the effectiveness of our approach. Our approach
achieves new performance records in depth estimation on NYU Depth V2 and
KITTI, and in semantic segmentation on CityScapes. Concurrently, the
proposed method attains results comparable to the current state-of-the-art in
semantic segmentation on ADE20K and pose estimation on COCO datasets, further
exemplifying its robustness and versatility.
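The two roles of the meta prompts described above can be sketched with plain cross-attention. This is a minimal, hypothetical illustration (not the authors' implementation): randomly initialized arrays stand in for the learnable prompt embeddings and for image feature tokens from a frozen diffusion backbone, and the shapes and dimensions are made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    # Scaled dot-product attention: queries attend over keys, mix values.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

rng = np.random.default_rng(0)
num_prompts, dim, num_tokens = 8, 16, 100

# Learnable meta prompts (random init here, stand-in for trained values);
# they take the place of the text embeddings in the T2I model.
meta_prompts = rng.standard_normal((num_prompts, dim))

# Stand-in for image feature tokens extracted by the diffusion backbone.
image_tokens = rng.standard_normal((num_tokens, dim))

# Role 1: prompts condition feature extraction -- image tokens attend to
# the prompts, activating task-relevant features.
conditioned = cross_attention(image_tokens, meta_prompts, meta_prompts)

# Role 2: prompts re-arrange the extracted features -- each prompt gathers
# the image features most relevant to it, yielding a compact task-focused
# representation of shape (num_prompts, dim).
rearranged = cross_attention(meta_prompts, conditioned, conditioned)
```

In this sketch the prompt count and dimensions are arbitrary; in practice the prompts would be optimized end-to-end with the perception head while the diffusion backbone provides the features.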