Audio-Visual Generalized Zero-Shot Learning using Pre-Trained Large Multi-Modal Models
CoRR (2024)
Abstract
Audio-visual zero-shot learning methods commonly build on features extracted
from pre-trained models, e.g. video or audio classification models. However,
existing benchmarks predate the popularization of large multi-modal models,
such as CLIP and CLAP. In this work, we explore such large pre-trained models
to obtain features, i.e. CLIP for visual features, and CLAP for audio features.
Furthermore, the CLIP and CLAP text encoders provide class label embeddings
which are combined to boost the performance of the system. We propose a simple
yet effective model that only relies on feed-forward neural networks,
exploiting the strong generalization capabilities of the new audio, visual and
textual features. Our framework achieves state-of-the-art performance on
VGGSound-GZSL, UCF-GZSL, and ActivityNet-GZSL with our new features. Code and
data available at: https://github.com/dkurzend/ClipClap-GZSL.
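The described approach can be sketched at a high level: CLIP visual features and CLAP audio features are fused and passed through a feed-forward network, then matched against class-label embeddings obtained by combining the CLIP and CLAP text encoders. The sketch below uses random NumPy arrays as stand-ins for the actual extracted features; the dimensions, the single-layer network, and the cosine-similarity classification rule are all illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: CLIP visual feature (512-d), CLAP audio feature
# (512-d), and combined text embeddings (CLIP + CLAP text encoders,
# concatenated per class, 1024-d). All values here are random stand-ins.
d_clip, d_clap, d_text, n_classes = 512, 512, 1024, 10

video_feat = rng.standard_normal(d_clip)  # stand-in for a CLIP visual feature
audio_feat = rng.standard_normal(d_clap)  # stand-in for a CLAP audio feature

# Combined class-label embeddings: one row per class.
class_emb = rng.standard_normal((n_classes, d_text))

# A single feed-forward layer projecting the fused audio-visual feature
# into the joint text-embedding space (weights are random placeholders).
W = rng.standard_normal((d_clip + d_clap, d_text)) / np.sqrt(d_clip + d_clap)
b = np.zeros(d_text)

fused = np.concatenate([video_feat, audio_feat])
proj = np.maximum(fused @ W + b, 0.0)  # ReLU feed-forward projection

# Zero-shot-style classification: cosine similarity to each class embedding.
sims = (class_emb @ proj) / (
    np.linalg.norm(class_emb, axis=1) * np.linalg.norm(proj) + 1e-8
)
pred = int(np.argmax(sims))
```

With real features in place of the random arrays, unseen classes can be scored simply by adding their text embeddings as new rows of `class_emb`, which is what makes the shared text-embedding space suitable for (generalized) zero-shot evaluation.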