CompA: Addressing the Gap in Compositional Reasoning in Audio-Language Models
arXiv (2023)
Abstract
A fundamental characteristic of audio is its compositional nature.
Audio-language models (ALMs) trained using a contrastive approach (e.g., CLAP)
that learns a shared representation between audio and language modalities have
improved performance in many downstream applications, including zero-shot audio
classification and audio retrieval. However, the ability of these models to
effectively perform compositional reasoning remains largely unexplored and
necessitates additional research. In this paper, we propose CompA, a collection
of two expert-annotated benchmarks with a majority of real-world audio samples,
to evaluate compositional reasoning in ALMs. Our proposed CompA-order evaluates
how well an ALM understands the order or occurrence of acoustic events in
audio, and CompA-attribute evaluates attribute-binding of acoustic events. An
instance from either benchmark consists of two audio-caption pairs, where both
audios have the same acoustic events but with different compositions. An ALM is
evaluated on how well it matches the right audio to the right caption. Using
this benchmark, we first show that current ALMs perform only marginally better
than random chance, thereby struggling with compositional reasoning. Next, we
propose CompA-CLAP, where we fine-tune CLAP using a novel learning method to
improve its compositional reasoning abilities. To train CompA-CLAP, we first
propose improvements to contrastive training with composition-aware hard
negatives, allowing for more focused training. Next, we propose a novel modular
contrastive loss that helps the model learn fine-grained compositional
understanding and overcomes the acute scarcity of openly available
compositional audios. CompA-CLAP significantly improves over all our baseline
models on the CompA benchmark, indicating its superior compositional reasoning
capabilities.
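The benchmark protocol above, where each instance pairs two audios and two captions containing the same acoustic events in different compositions, can be scored with a simple pairwise matching rule. This is a minimal sketch in the spirit of Winoground-style group scores; the function name `pairwise_match_correct` is hypothetical, and the paper's exact metric may differ.

```python
import numpy as np

def pairwise_match_correct(sim):
    """Score one benchmark instance of two audio-caption pairs.

    sim: 2x2 similarity matrix where sim[i, j] is the model's similarity
    between audio i and caption j. Both captions describe the same acoustic
    events but in different compositions (e.g., swapped event order).
    The instance counts as correct only if each audio prefers its own caption.
    """
    return bool(sim[0, 0] > sim[0, 1] and sim[1, 1] > sim[1, 0])

# Under this rule, a model scoring similarities at random matches both
# pairs correctly about 25% of the time, so "marginally better than random
# chance" means group accuracy close to that floor.
good = np.array([[0.9, 0.2],
                 [0.1, 0.8]])   # each audio prefers its own caption
bad = np.array([[0.2, 0.9],
                [0.1, 0.8]])    # audio 0 prefers the wrong caption
```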
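The contrastive training with composition-aware hard negatives could be sketched as an InfoNCE-style loss in which each audio's candidate caption set is extended with embeddings of recomposed captions (e.g., the same events with their order swapped). This is a simplified illustration, not the paper's implementation: the function names are hypothetical, and the modular contrastive loss is not shown.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def contrastive_loss_with_hard_negatives(audio_emb, text_emb, hard_neg_emb,
                                         temperature=0.07):
    """Audio-to-text contrastive loss with composition-aware hard negatives.

    audio_emb: (B, D) audio embeddings
    text_emb:  (B, D) matching caption embeddings (positive at index i)
    hard_neg_emb: (B, K, D) K recomposed-caption embeddings per audio,
                  appended to the candidate pool as extra negatives
    """
    B, K, D = hard_neg_emb.shape
    a = l2_normalize(audio_emb)
    t = l2_normalize(text_emb)
    hn = l2_normalize(hard_neg_emb.reshape(B * K, D))

    # Candidate pool: in-batch captions first, hard negatives appended.
    candidates = np.concatenate([t, hn], axis=0)       # (B + B*K, D)
    logits = (a @ candidates.T) / temperature          # (B, B + B*K)

    # Numerically stable log-softmax; correct caption for audio i is column i.
    m = logits.max(axis=1, keepdims=True)
    log_probs = (logits - m) - np.log(
        np.exp(logits - m).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(B), np.arange(B)])
```

Because the hard negatives share the positive caption's events and differ only in composition, they produce high-similarity distractors, so lowering this loss pushes the model to encode event order and attribute binding rather than a bag of events.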