Codebook Transfer with Part-of-Speech for Vector-Quantized Image Modeling
CVPR 2024
Abstract
Vector-Quantized Image Modeling (VQIM) is a fundamental research problem in
image synthesis, which aims to represent an image with a discrete token
sequence. Existing studies effectively address this problem by learning a
discrete codebook from scratch and in a code-independent manner to quantize
continuous representations into discrete tokens. However, learning a codebook
from scratch and in a code-independent manner is highly challenging, and is
likely a key cause of codebook collapse: without regard to the relationships
between codes or to good codebook priors, some code vectors are rarely
optimized and eventually die off. In this paper, inspired by pretrained
language models, we observe that such models have in effect already pretrained
a superior codebook on large text corpora, yet this information is rarely
exploited in VQIM. To this end, we propose a novel codebook transfer
framework with part-of-speech, called VQCT, which aims to transfer a
well-trained codebook from pretrained language models to VQIM for robust
codebook learning. Specifically, we first introduce a pretrained codebook from
language models and part-of-speech knowledge as priors. Then, we construct a
vision-related codebook with these priors to achieve codebook transfer.
Finally, a novel codebook transfer network is designed to exploit abundant
semantic relationships between codes contained in pretrained codebooks for
robust VQIM codebook learning. Experimental results on four datasets show that
our VQCT method achieves superior VQIM performance over previous
state-of-the-art methods.
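
The two operations the abstract describes are the standard vector-quantization step (nearest-neighbour codebook lookup with a straight-through gradient, as in VQ-VAE-style models) and a trainable mapping from a frozen language-model codebook into vision space. The sketch below illustrates both under stated assumptions: the `TransferredCodebook` module, its two-layer `transfer_net` MLP, and the use of raw word embeddings as the pretrained codebook are hypothetical simplifications for illustration, not the paper's exact architecture (which additionally exploits part-of-speech structure).

```python
# Minimal sketch: quantizing encoder features against a codebook generated
# from frozen pretrained word embeddings. Assumptions (not from the paper):
# the MLP transfer network and the direct use of word embeddings as priors.
import torch
import torch.nn as nn


class TransferredCodebook(nn.Module):
    def __init__(self, word_embeddings: torch.Tensor, code_dim: int):
        super().__init__()
        # Frozen pretrained word embeddings (e.g., from a language model),
        # standing in for the pretrained codebook prior.
        self.register_buffer("word_emb", word_embeddings.detach().clone())
        # Hypothetical transfer network: maps language-space vectors into a
        # vision-related codebook; only this module is trained.
        self.transfer_net = nn.Sequential(
            nn.Linear(word_embeddings.shape[1], code_dim),
            nn.ReLU(),
            nn.Linear(code_dim, code_dim),
        )

    def forward(self, z_e: torch.Tensor):
        """Quantize encoder features z_e of shape (B, N, code_dim)."""
        codebook = self.transfer_net(self.word_emb)    # (K, code_dim)
        B, N, D = z_e.shape
        flat = z_e.reshape(-1, D)                      # (B*N, D)
        # Standard nearest-neighbour lookup used in VQ-VAE-style models.
        dists = torch.cdist(flat, codebook)            # (B*N, K)
        indices = dists.argmin(dim=-1).view(B, N)      # discrete token ids
        z_q = codebook[indices]                        # (B, N, D)
        # Straight-through estimator so gradients reach the encoder.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices


# Example usage with a toy vocabulary of 1000 embeddings of dimension 300:
vocab = torch.randn(1000, 300)
model = TransferredCodebook(vocab, code_dim=256)
z_e = torch.randn(4, 64, 256)   # encoder output: 4 images, 64 tokens each
z_q, ids = model(z_e)
```

One plausible reading of why this helps with collapse: because every code vector is produced by the shared transfer network rather than optimized independently, gradients from frequently selected codes also update the weights that generate rarely selected ones, so no code is left entirely unoptimized.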