UniCode: Learning a Unified Codebook for Multimodal Large Language Models
arXiv (2024)
Abstract
In this paper, we propose UniCode, a novel approach within the domain of multimodal large language models (MLLMs) that learns a unified codebook to efficiently tokenize visual, text, and potentially other types of signals. This innovation addresses a critical limitation of existing MLLMs: their reliance on a text-only codebook, which restricts their ability to generate images and text in a multimodal context. To this end, we propose a language-driven iterative training paradigm, coupled with an in-context pre-training task we term "image decompression", enabling our model to interpret compressed visual data and generate high-quality images. The unified codebook empowers our model to extend visual instruction tuning to non-linguistic generation tasks. Moreover, UniCode is adaptable to diverse stacked quantization approaches that compress visual signals into a more compact token representation. Despite using significantly fewer parameters and less data during training, UniCode demonstrates promising capabilities in visual reconstruction and generation. It also achieves performance comparable to leading MLLMs across a spectrum of VQA benchmarks.
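To make the tokenization concrete, below is a minimal sketch (not the paper's implementation; the function names, tensor shapes, and the two-level codebook configuration are illustrative assumptions) of how continuous visual features could be mapped to discrete ids in a shared codebook, together with a residual ("stacked") variant that trades more levels for a more compact per-level representation:

```python
import torch

def quantize(features, codebook):
    """Nearest-neighbor lookup: map features (N, D) to discrete ids in a (K, D) codebook."""
    dists = torch.cdist(features, codebook)      # (N, K) pairwise Euclidean distances
    ids = dists.argmin(dim=1)                    # discrete ids, usable as tokens in a shared vocabulary
    return ids, codebook[ids]

def stacked_quantize(features, codebooks):
    """Residual ("stacked") quantization: each level encodes what earlier levels missed."""
    residual = features
    all_ids, recon = [], torch.zeros_like(features)
    for book in codebooks:
        ids, q = quantize(residual, book)
        all_ids.append(ids)
        recon = recon + q
        residual = residual - q
    return torch.stack(all_ids, dim=1), recon    # (N, L) ids, (N, D) reconstruction

# Illustrative usage: 196 patch features of width 256, two stacked 8192-entry codebooks.
feats = torch.randn(196, 256)
books = [torch.randn(8192, 256) for _ in range(2)]
ids, recon = stacked_quantize(feats, books)
print(ids.shape, recon.shape)  # torch.Size([196, 2]) torch.Size([196, 256])
```

Under this reading, the discrete ids produced by the codebook lookup play the same role as text token ids, which is what allows a single vocabulary to cover both modalities.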