From Pixels to Graphs: Open-Vocabulary Scene Graph Generation with Vision-Language Models
arXiv (2024)
Abstract
Scene graph generation (SGG) aims to parse a visual scene into an
intermediate graph representation for downstream reasoning tasks. Despite
recent advancements, existing methods struggle to generate scene graphs with
novel visual relation concepts. To address this challenge, we introduce a new
open-vocabulary SGG framework based on sequence generation. Our framework
leverages vision-language pre-trained models (VLMs) through an image-to-graph
generation paradigm: we generate scene graph sequences via image-to-text
generation with a VLM and then construct scene graphs from these sequences. In
this way, we harness the strong capabilities of VLMs for open-vocabulary SGG
and seamlessly integrate explicit relational modeling to enhance
vision-language tasks. Experimental results demonstrate that our design not
only achieves superior performance with an open vocabulary but also improves
downstream vision-language task performance through explicit relation
modeling.
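The second step of the pipeline described above, constructing a scene graph from a generated sequence, can be sketched as follows. The triplet serialization format used here (semicolon-separated `subject predicate object` spans) is an assumption for illustration, not the paper's actual tokenization.

```python
# Hypothetical sketch of the "sequence -> scene graph" step: the VLM emits a
# flat text sequence of relation triplets, which we parse into graph nodes
# (objects) and edges (relations). The sequence format is assumed.

def parse_scene_graph(sequence: str):
    """Parse 'subj predicate obj ; subj predicate obj ; ...' into a graph."""
    nodes, edges = set(), []
    for triplet in sequence.split(";"):
        parts = triplet.strip().split()
        if len(parts) < 3:
            continue  # skip malformed fragments from imperfect generation
        subj, obj = parts[0], parts[-1]
        predicate = " ".join(parts[1:-1])  # predicates may be multi-word
        nodes.update((subj, obj))
        edges.append((subj, predicate, obj))
    return sorted(nodes), edges

nodes, edges = parse_scene_graph("man riding horse ; horse standing on grass")
```

Because the relation vocabulary is never enumerated, any predicate string the VLM generates becomes an edge label, which is what makes the sequence-based formulation naturally open-vocabulary.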