Bayesian Prompt Learning for Image-Language Model Generalization.

Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023

Abstract
Foundational image-language models have generated considerable interest due to their efficient adaptation to downstream tasks by prompt learning. Prompt learning treats part of the language model input as trainable while freezing the rest, and optimizes an Empirical Risk Minimization objective. However, Empirical Risk Minimization is known to suffer from distributional shifts which hurt generalizability to prompts unseen during training. By leveraging the regularization ability of Bayesian methods, we frame prompt learning from the Bayesian perspective and formulate it as a variational inference problem. Our approach regularizes the prompt space, reduces overfitting to the seen prompts and improves prompt generalization on unseen prompts. Our framework is implemented by modeling the input prompt space in a probabilistic manner, as an a priori distribution, which makes our proposal compatible with prompt learning approaches that are unconditional or conditional on the image. We demonstrate empirically on 15 benchmarks that Bayesian prompt learning provides an appropriate coverage of the prompt space, prevents learning spurious features, and exploits transferable invariant features. This results in better generalization of unseen prompts, even across different datasets and domains. Code available at: https://github.com/saic-fi/Bayesian-Prompt-Learning
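To make the variational framing concrete, below is a minimal PyTorch sketch of treating the learnable prompt tokens as a Gaussian distribution trained with an ELBO-style objective. This is an illustration under assumptions, not the paper's actual implementation: the class name `VariationalPrompt`, the helper `elbo_loss`, the standard-normal prior, and the `beta` weight are all hypothetical; the frozen CLIP-style encoder that would consume the sampled prompt is omitted.

```python
# Hypothetical sketch: prompt learning as variational inference.
# A Gaussian posterior over prompt token embeddings is sampled via the
# reparameterization trick, and a KL term to a standard-normal prior
# regularizes the prompt space (the role described in the abstract).
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariationalPrompt(nn.Module):
    """Learnable Gaussian distribution over prompt token embeddings."""

    def __init__(self, n_tokens: int, dim: int):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_tokens, dim))
        self.log_var = nn.Parameter(torch.zeros(n_tokens, dim))

    def sample(self) -> torch.Tensor:
        # Reparameterization trick: prompt = mu + sigma * eps
        eps = torch.randn_like(self.mu)
        return self.mu + torch.exp(0.5 * self.log_var) * eps

    def kl_to_standard_normal(self) -> torch.Tensor:
        # KL( N(mu, sigma^2) || N(0, I) ), summed over tokens and dims
        return 0.5 * (
            self.log_var.exp() + self.mu.pow(2) - 1.0 - self.log_var
        ).sum()


def elbo_loss(logits: torch.Tensor, labels: torch.Tensor,
              prompt: VariationalPrompt, beta: float = 1e-3) -> torch.Tensor:
    # Negative ELBO: cross-entropy (data likelihood) + beta-weighted KL
    # (the Bayesian regularizer over the prompt space).
    return F.cross_entropy(logits, labels) + beta * prompt.kl_to_standard_normal()
```

In a full pipeline, the sampled prompt embeddings would be prepended to the class-name tokens and passed through the frozen text encoder to produce `logits` against image features; only `mu` and `log_var` receive gradients, mirroring the abstract's description of keeping the rest of the model fixed while regularizing the trainable prompt.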
Keywords
generalization, learning, model, image-language