OpenSD: Unified Open-Vocabulary Segmentation and Detection
CoRR (2023)
Abstract
Recently, a few open-vocabulary methods have been proposed by employing a
unified architecture to tackle generic segmentation and detection tasks.
However, their performance still lags behind the task-specific models due to
the conflict between different tasks, and their open-vocabulary capability is
limited due to the inadequate use of CLIP. To address these challenges, we
present a universal transformer-based framework, abbreviated as OpenSD, which
utilizes the same architecture and network parameters to handle open-vocabulary
segmentation and detection tasks. First, we introduce a decoder decoupled
learning strategy to alleviate the semantic conflict between thing and stuff
categories so that each individual task can be learned more effectively under
the same framework. Second, to better leverage CLIP for end-to-end segmentation
and detection, we propose dual classifiers to handle the in-vocabulary and
out-of-vocabulary domains, respectively. The text encoder is further trained
to be region-aware for both thing and stuff categories through decoupled prompt
learning, enabling the model to filter out duplicated and low-quality
predictions, which is important for end-to-end segmentation and detection. Extensive
experiments are conducted on multiple datasets under various circumstances. The
results demonstrate that OpenSD outperforms state-of-the-art open-vocabulary
segmentation and detection methods in both closed- and open-vocabulary
settings. Code is available at https://github.com/strongwolf/OpenSD.