A Vision Check-up for Language Models
CoRR (2024)
Abstract
What does learning to model relationships between strings teach large
language models (LLMs) about the visual world? We systematically evaluate LLMs'
abilities to generate and recognize an assortment of visual concepts of
increasing complexity and then demonstrate how a preliminary visual
representation learning system can be trained using models of text. As language
models lack the ability to consume or output visual information as pixels, we
use code to represent images in our study. Although LLM-generated images do not
look like natural images, results on image generation and the ability of models
to correct these generated images indicate that precise modeling of strings can
teach language models about numerous aspects of the visual world. Furthermore,
experiments on self-supervised visual representation learning, utilizing images
generated with text models, highlight the potential to train vision models
capable of making semantic assessments of natural images using just LLMs.
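To make the "images as code" idea concrete, below is a minimal, hypothetical sketch (not the authors' actual pipeline): a language model is assumed to return drawing code for a visual concept, and that code is executed and rasterized into a pixel array that a vision model could consume. The LLM call is mocked with a fixed string; the function name and the choice of matplotlib as the rendering backend are illustrative assumptions.

```python
# Minimal sketch of representing an image as code: a (mocked) LLM-written
# drawing script is executed and rendered into an RGB pixel array.
import io

import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

# Hypothetical stand-in for an LLM response to a prompt such as
# "write matplotlib code that draws a red circle above a blue square".
llm_generated_code = """
fig, ax = plt.subplots(figsize=(2, 2), dpi=64)
ax.add_patch(plt.Circle((0.5, 0.7), 0.15, color='red'))
ax.add_patch(plt.Rectangle((0.35, 0.2), 0.3, 0.3, color='blue'))
ax.set_xlim(0, 1); ax.set_ylim(0, 1); ax.axis('off')
"""

def render_code_to_image(code: str) -> np.ndarray:
    """Execute drawing code and return the rendered figure as an RGB array."""
    namespace = {"plt": plt, "np": np}
    exec(code, namespace)                        # run the model-written drawing code
    buf = io.BytesIO()
    namespace["fig"].savefig(buf, format="png")  # rasterize the figure to PNG bytes
    plt.close(namespace["fig"])
    buf.seek(0)
    return np.array(Image.open(buf).convert("RGB"))

image = render_code_to_image(llm_generated_code)
print(image.shape)  # e.g. (128, 128, 3): pixels usable for downstream vision training
```

Rendered outputs of this kind are what would then be evaluated for generation quality or used as training data for a self-supervised vision model, per the abstract.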