A Modular End-to-End Multimodal Learning Method for Structured and Unstructured Data
CoRR (2024)
Abstract
Multimodal learning is a rapidly growing research field that has
revolutionized multitasking and generative modeling in AI. While much of the
research has focused on dealing with unstructured data (e.g., language, images,
audio, or video), structured data (e.g., tabular data, time series, or signals)
has received less attention. However, many industry-relevant use cases involve,
or can benefit from, both types of data. In this work, we propose a
modular, end-to-end multimodal learning method called MAGNUM, which can
natively handle both structured and unstructured data. MAGNUM is flexible
enough to employ any specialized unimodal module to extract, compress, and fuse
information from all available modalities.
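The pipeline the abstract outlines (per-modality unimodal encoders, compression into a shared dimension, then fusion) can be illustrated with a minimal sketch. This is not MAGNUM's actual architecture; the encoders, dimensions, and element-wise mean fusion below are all illustrative assumptions in plain Python:

```python
import math
import random

random.seed(0)

def rand_matrix(rows, cols):
    # random weights standing in for trained parameters (assumption)
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    # v @ m, where m has shape (len(v), out_dim)
    return [sum(v[i] * m[i][j] for i in range(len(v))) for j in range(len(m[0]))]

def encode_tabular(x, w):
    # hypothetical unimodal encoder for structured input: linear map + tanh
    return [math.tanh(h) for h in matvec(w, x)]

def encode_text(tokens, emb):
    # hypothetical unimodal encoder for unstructured input: mean token embedding
    dim = len(emb[0])
    return [sum(emb[t][d] for t in tokens) / len(tokens) for d in range(dim)]

def compress(z, p):
    # project a modality embedding into a shared dimension
    return matvec(p, z)

def fuse(zs):
    # simplest possible fusion: element-wise mean across modalities
    dim = len(zs[0])
    return [sum(z[d] for z in zs) / len(zs) for d in range(dim)]

tab = [random.gauss(0, 1) for _ in range(4)]  # 4 tabular features (toy)
w_tab = rand_matrix(4, 8)
emb = rand_matrix(50, 8)                      # toy vocabulary of 50, dim 8
tokens = [3, 17, 42]                          # a toy "sentence"
p_tab = rand_matrix(8, 6)
p_txt = rand_matrix(8, 6)

fused = fuse([compress(encode_tabular(tab, w_tab), p_tab),
              compress(encode_text(tokens, emb), p_txt)])
print(len(fused))  # 6
```

The modular point is that `encode_tabular` and `encode_text` are interchangeable: any specialized unimodal module producing a vector can slot in, as long as a `compress` projection maps it into the shared fusion space.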