VLMEvalKit: an Open-Source Toolkit for Evaluating Large Multi-Modality Models

Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, Dahua Lin, Kai Chen

arXiv (2024)

Abstract
We present VLMEvalKit: an open-source toolkit for evaluating large multi-modality models based on PyTorch. The toolkit aims to provide a user-friendly and comprehensive framework for researchers and developers to evaluate existing multi-modality models and publish reproducible evaluation results. In VLMEvalKit, we implement over 70 different large multi-modality models, including both proprietary APIs and open-source models, as well as more than 20 different multi-modal benchmarks. New models can be added to the toolkit by implementing a single interface, after which the toolkit automatically handles the remaining workload, including data preparation, distributed inference, prediction post-processing, and metric calculation. Although the toolkit is currently mainly used for evaluating large vision-language models, its design is compatible with future updates that incorporate additional modalities, such as audio and video. Based on the evaluation results obtained with the toolkit, we host the OpenVLM Leaderboard, a comprehensive leaderboard to track the progress of multi-modality learning research. The toolkit is released at https://github.com/open-compass/VLMEvalKit and is actively maintained.
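
The "single interface" mentioned in the abstract is concrete enough to sketch. The following is a minimal illustration of how a new model might plug into the toolkit, assuming the BaseModel subclass with a generate_inner method used in the VLMEvalKit repository; the exact import path, the INTERLEAVE flag, and the message schema shown here are assumptions for illustration rather than verbatim API guarantees.

    # Minimal sketch of adding a custom model to VLMEvalKit.
    # Assumed pattern: subclass BaseModel and implement generate_inner.
    from vlmeval.vlm.base import BaseModel  # import path assumed

    class MyVLM(BaseModel):
        """Toy model that echoes the text segments of a multimodal prompt."""

        INTERLEAVE = True  # assumed flag: model accepts interleaved image/text input

        def generate_inner(self, message, dataset=None):
            # `message` is assumed to be a list of segments such as:
            #   [{'type': 'image', 'value': '/path/to/img.jpg'},
            #    {'type': 'text',  'value': 'What is shown in the image?'}]
            texts = [seg['value'] for seg in message if seg['type'] == 'text']
            # A real model would run vision-language inference here; the toolkit
            # then handles post-processing and metric calculation automatically.
            return ' '.join(texts)

Once such a class is registered under a name in the toolkit's model configuration (a setup step assumed here), an evaluation run on a supported benchmark would follow the repository's run.py pattern, e.g. python run.py --data MMBench_DEV_EN --model my_vlm.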