GENERATIVE INFORMATION FUSION

2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)

Abstract
In this work, we demonstrate the ability to exploit sensing modalities to mitigate a missing modality or to potentially re-target resources. This is tantamount to developing proxy sensing capabilities for multi-modal learning. In classical fusion, multiple sensors are required to capture different information about the same target. Maintaining and collecting samples from multiple sensors can be financially demanding. Additionally, the effort necessary to ensure a logical mapping between the modalities may be prohibitively limiting. We examine the scenario where we have access to all modalities during training, but only a single modality at testing. In our approach, we initialize the parameters of our single-modality inference network with weights learned from the fusion of multiple modalities through both classification and GAN losses. Our experiments show that emulating a multimodal system by perturbing a single modality with noise can achieve competitive results compared to using multiple modalities.
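The train/test asymmetry described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the function names (`fuse`, `emulate_fusion`), the concatenation-based fusion, and the `noise_scale` parameter are all assumptions introduced here to show the idea of standing in for a missing modality with a noise-perturbed copy of the available one.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(modality_a, modality_b):
    # Classical fusion (hypothetical): concatenate features from two sensors.
    return np.concatenate([modality_a, modality_b], axis=-1)

def emulate_fusion(modality_a, noise_scale=0.1):
    # Test-time proxy (hypothetical scheme): the missing modality is
    # replaced by a noise-perturbed copy of the available one, so the
    # same fusion-trained network can still be applied.
    proxy_b = modality_a + noise_scale * rng.standard_normal(modality_a.shape)
    return np.concatenate([modality_a, proxy_b], axis=-1)

x_a = rng.standard_normal((4, 16))   # e.g. features from modality A
x_b = rng.standard_normal((4, 16))   # e.g. features from modality B (train only)

train_input = fuse(x_a, x_b)         # both modalities available during training
test_input = emulate_fusion(x_a)     # only modality A available at testing
assert train_input.shape == test_input.shape  # same network input shape
```

Because the emulated input has the same shape as the fused training input, the single-modality network can inherit the fusion-trained weights directly, which is the initialization strategy the abstract describes.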
Keywords
multimodal fusion, remote sensing, GANs