X-VILA: Cross-Modality Alignment for Large Language Model
CoRR (2024)
Abstract
We introduce X-VILA, an omni-modality model designed to extend the
capabilities of large language models (LLMs) by incorporating image, video, and
audio modalities. By aligning modality-specific encoders with LLM inputs and
diffusion decoders with LLM outputs, X-VILA achieves cross-modality
understanding, reasoning, and generation. To facilitate this cross-modality
alignment, we curate an effective interleaved any-to-any modality
instruction-following dataset. Furthermore, we identify a significant problem
with the current cross-modality alignment method, which results in visual
information loss. To address this issue, we propose a visual alignment mechanism
with a visual embedding highway module. We then introduce a resource-efficient
recipe for training X-VILA that exhibits proficiency in any-to-any modality
conversation, surpassing previous approaches by large margins. X-VILA also
showcases emergent properties across modalities even in the absence of similar
training data. The project will be made open-source.
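The abstract describes the overall wiring: modality-specific encoders are aligned to the LLM's input space, LLM outputs are aligned to diffusion decoders for generation, and a visual embedding highway carries visual features around the LLM to limit visual information loss. The PyTorch sketch below illustrates that wiring only; all module names, dimensions, and the additive fusion of the highway path are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class VisualEmbeddingHighway(nn.Module):
    """Hypothetical highway: routes visual encoder features directly to the
    decoder conditioning path, bypassing the LLM (placeholder design)."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, visual_features: torch.Tensor) -> torch.Tensor:
        return self.proj(visual_features)


class XVILASketch(nn.Module):
    """Minimal any-to-any wiring suggested by the abstract; every component
    here is a stand-in, not the authors' architecture."""
    def __init__(self, enc_dim=1024, llm_dim=4096, dec_cond_dim=768):
        super().__init__()
        modalities = ("image", "video", "audio")
        # Stand-ins for modality-specific encoders (image/video/audio).
        self.encoders = nn.ModuleDict({m: nn.Linear(enc_dim, enc_dim) for m in modalities})
        # Input-side alignment: encoder features -> LLM embedding space.
        self.in_proj = nn.ModuleDict({m: nn.Linear(enc_dim, llm_dim) for m in modalities})
        # Placeholder LLM backbone.
        self.llm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=llm_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Output-side alignment: LLM hidden states -> diffusion-decoder conditioning.
        self.out_proj = nn.ModuleDict({m: nn.Linear(llm_dim, dec_cond_dim) for m in modalities})
        # Visual embedding highway and its projection into the conditioning space.
        self.highway = VisualEmbeddingHighway(enc_dim)
        self.highway_to_cond = nn.Linear(enc_dim, dec_cond_dim)

    def forward(self, modality: str, features: torch.Tensor) -> torch.Tensor:
        enc = self.encoders[modality](features)      # (B, T, enc_dim)
        llm_in = self.in_proj[modality](enc)         # (B, T, llm_dim)
        hidden = self.llm(llm_in)                    # (B, T, llm_dim)
        cond = self.out_proj[modality](hidden)       # (B, T, dec_cond_dim)
        if modality in ("image", "video"):
            # Fuse the highway path so fine-grained visual detail reaches the
            # decoder without passing through the LLM (assumed additive fusion).
            cond = cond + self.highway_to_cond(self.highway(enc))
        return cond  # would condition a diffusion decoder for generation


# Example usage with dummy visual tokens.
model = XVILASketch()
dummy = torch.randn(1, 16, 1024)   # 16 visual tokens from a hypothetical encoder
cond = model("image", dummy)       # (1, 16, 768) decoder conditioning
```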