Multimodal Infusion Tuning for Large Models
CoRR (2024)
Abstract
Recent advancements in large-scale models have showcased remarkable
generalization capabilities in various tasks. However, integrating multimodal
processing into these models presents a significant challenge, as it often
comes with a high computational burden. To address this challenge, we introduce
a new parameter-efficient multimodal tuning strategy for large models in this
paper, referred to as Multimodal Infusion Tuning (MiT). MiT leverages decoupled
self-attention mechanisms within large language models to effectively integrate
information from diverse modalities such as images and acoustics. In MiT, we
also design a novel adaptive rescaling strategy at the head level, which
optimizes the representation of infused multimodal features. Notably, all
foundation models are kept frozen during the tuning process to reduce the
computational burden (only 2.5% of parameters are tunable). We conduct experiments
across a range of multimodal tasks, including image-related tasks like
referring segmentation and non-image tasks such as sentiment analysis. Our
results show that MiT achieves state-of-the-art performance in multimodal
understanding while significantly reducing computational overhead (roughly 10%
of that of previous methods). Moreover, our tuned model exhibits robust reasoning
abilities even in complex scenarios.
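The core mechanism described above — infusing other modalities into a frozen language model's self-attention and rescaling their contribution per head — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the function name `infusion_attention`, the choice to append projected modal tokens to the keys/values, and the scalar per-head gate parameterization are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def infusion_attention(text, modal, Wq, Wk, Wv, gates, n_heads):
    """Hypothetical sketch of multimodal infusion into frozen attention.

    text:  (T, d) text token states; Wq/Wk/Wv are the frozen LM's
           attention projections.
    modal: (M, d) image/audio tokens already projected into the LM's space.
    gates: (n_heads,) learnable per-head scalars ("adaptive rescaling",
           parameterization assumed here).
    """
    T, d = text.shape
    dh = d // n_heads
    q = (text @ Wq).reshape(T, n_heads, dh)
    # Infuse: modal tokens are appended to the key/value sequence only,
    # so queries still come purely from text (a "decoupled" design guess).
    kv_in = np.concatenate([text, modal], axis=0)          # (T+M, d)
    k = (kv_in @ Wk).reshape(-1, n_heads, dh)
    v = (kv_in @ Wv).reshape(-1, n_heads, dh)
    out = np.empty((T, n_heads, dh))
    for h in range(n_heads):
        att = softmax(q[:, h] @ k[:, h].T / np.sqrt(dh))   # (T, T+M)
        # Split the attended context into its text and modal parts.
        ctx_text = att[:, :T] @ v[:T, h]
        ctx_modal = att[:, T:] @ v[T:, h]
        # Head-level rescaling: only the infused modal part is gated,
        # so the frozen text pathway is left untouched.
        out[:, h] = ctx_text + gates[h] * ctx_modal
    return out.reshape(T, d)
```

In this sketch only `gates` (and whatever projects raw modalities into `modal`) would be trained, which matches the abstract's claim that the foundation model itself stays frozen and only a small fraction of parameters are tunable.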