Griffon: Spelling out All Object Locations at Any Granularity with Large Language Models
CoRR (2023)
Abstract
Replicating the innate human ability to detect all objects based on free-form
texts at any granularity remains a formidable challenge for Vision-Language
models. Current Large Vision Language Models (LVLMs) are predominantly
constrained to grounding a single, pre-existing object, relying solely on data
from Referring Expression Comprehension tasks. This limitation leads to a
compromise in model design, necessitating the introduction of visual expert
models or the integration of customized head structures. Beyond these
constraints, our research delves into the untapped potential of LVLMs and
uncovers their inherent capability for basic object perception, allowing them to
accurately identify and locate objects of interest. Building on this insight,
we introduce a novel language-prompted localization dataset designed to fully
unleash the capabilities of LVLMs in integrating fine-grained object perception
with precise location awareness. More importantly, we present
$\textbf{Griffon}$, a purely LVLM-based baseline, which does not require the
introduction of any special tokens, expert models, or additional detection
modules. It simply maintains a consistent structure with popular LVLMs by
unifying data formats across various localization-related scenarios and is
trained end-to-end through a well-designed pipeline. Comprehensive experiments
demonstrate that $\textbf{Griffon}$ not only achieves state-of-the-art
performance on the fine-grained RefCOCO series but also approaches the
capabilities of the expert model Faster R-CNN on the detection benchmark MSCOCO.
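The abstract's claim that Griffon "unifies data formats across various localization-related scenarios" can be pictured as converting both referring-expression and detection annotations into the same instruction-response text pairs, with boxes emitted as plain coordinate text rather than special tokens. The sketch below is a minimal illustration under that assumption; the prompt wording, coordinate encoding, and helper names (`box_to_text`, `to_instruction_sample`) are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch: unify REC and detection annotations into one
# instruction-response text format an LVLM could be trained on end-to-end.
# The exact format used by Griffon is not specified in the abstract.

def box_to_text(box, width, height, precision=3):
    """Encode an absolute [x1, y1, x2, y2] box as normalized coordinate text."""
    x1, y1, x2, y2 = box
    norm = [x1 / width, y1 / height, x2 / width, y2 / height]
    return "[" + ",".join(f"{v:.{precision}f}" for v in norm) + "]"

def to_instruction_sample(image_id, width, height, phrase, boxes):
    """Turn one annotation (a referring phrase or a category name plus its
    boxes) into a prompt/response pair. Zero, one, or many boxes all share
    the same response format, so REC and detection data can be mixed."""
    prompt = f'Locate every instance of "{phrase}" in the image.'
    if not boxes:
        response = "None."
    else:
        response = " ".join(
            f"{phrase}{box_to_text(b, width, height)}" for b in boxes
        )
    return {"image_id": image_id, "prompt": prompt, "response": response}

# Example: a detection-style annotation with two instances of "dog".
sample = to_instruction_sample(
    image_id="000123",
    width=640, height=480,
    phrase="dog",
    boxes=[[48, 60, 210, 400], [300, 90, 610, 455]],
)
print(sample["prompt"])
print(sample["response"])
```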