Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications
CoRR (2023)
Abstract
In many machine learning systems that jointly learn from multiple modalities,
a core research question is to understand the nature of multimodal
interactions: how modalities combine to provide new task-relevant information
that was not present in either alone. We study this challenge of interaction
quantification in a semi-supervised setting with only labeled unimodal data and
naturally co-occurring multimodal data (e.g., unlabeled images and captions,
video and corresponding audio) for which labeling is time-consuming. Using
a precise information-theoretic definition of interactions, our key
contribution is the derivation of lower and upper bounds to quantify the amount
of multimodal interactions in this semi-supervised setting. We propose two
lower bounds: one based on the shared information between modalities and the
other based on disagreement between separately trained unimodal classifiers,
and derive an upper bound through connections to approximate algorithms for
min-entropy couplings. We validate these estimated bounds and show how they
accurately track true interactions. Finally, we show how these theoretical
results can be used to estimate multimodal model performance, guide data
collection, and select appropriate multimodal models for various tasks.
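
To make the disagreement-based lower bound concrete, below is a minimal sketch of the quantity it is built from: the rate at which two separately trained unimodal classifiers disagree on unlabeled co-occurring pairs. This assumes scikit-learn-style classifiers with a predict method; the names (clf1, clf2, X1_unlab, X2_unlab) are illustrative, not from the paper, and the paper's actual bounds are information-theoretic rather than this raw error-style rate.

import numpy as np

def disagreement_rate(clf1, clf2, X1_unlab, X2_unlab):
    # Predictions of each separately trained unimodal classifier on the
    # unlabeled, naturally co-occurring data (row i of X1_unlab and
    # X2_unlab form one multimodal pair).
    preds1 = clf1.predict(X1_unlab)
    preds2 = clf2.predict(X2_unlab)
    # Fraction of pairs on which the two modalities' predictions differ.
    return float(np.mean(preds1 != preds2))

# Hypothetical usage: fit each classifier on its own labeled unimodal data,
# then measure disagreement on the unlabeled paired data.
# clf1 = LogisticRegression().fit(X1_labeled, y1)
# clf2 = LogisticRegression().fit(X2_labeled, y2)
# d = disagreement_rate(clf1, clf2, X1_unlab, X2_unlab)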
Keywords
multimodal learning, labeled multimodal data