CREST: Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-Shot Learning
CoRR (2024)
Abstract
Zero-shot learning (ZSL) enables the recognition of novel classes by
leveraging semantic knowledge transfer from known to unknown categories. This
knowledge, typically encapsulated in attribute descriptions, aids in
identifying class-specific visual features, thus facilitating visual-semantic
alignment and improving ZSL performance. However, real-world challenges such as
distribution imbalances and attribute co-occurrence among instances often
hinder the discernment of local variances in images, a problem exacerbated by
the scarcity of fine-grained, region-specific attribute annotations. Moreover,
the variability in visual presentation within categories can also skew
attribute-category associations. In response, we propose a bidirectional
cross-modal ZSL approach, CREST. It begins by extracting representations for
attribute and visual localization and employs Evidential Deep Learning (EDL) to
measure underlying epistemic uncertainty, thereby enhancing the model's
resilience against hard negatives. CREST incorporates dual learning pathways,
focusing on both visual-category and attribute-category alignments, to ensure
robust correlation between latent and observable spaces. Moreover, we introduce
an uncertainty-informed cross-modal fusion technique to refine visual-attribute
inference. Extensive experiments demonstrate our model's effectiveness and
unique explainability across multiple datasets. Our code and data are available
at https://github.com/JethroJames/CREST.
Comments: Ongoing work; 10 pages, 2 Tables, 9 Figures.
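
The abstract does not spell out how CREST quantifies epistemic uncertainty, but the standard Evidential Deep Learning formulation it builds on places a Dirichlet distribution over class probabilities and derives uncertainty from the total evidence. The following is a minimal illustrative sketch of that formulation, not CREST's actual implementation; the function name and toy evidence values are assumptions.

```python
import numpy as np

def edl_uncertainty(evidence: np.ndarray):
    """Dirichlet-based EDL uncertainty for one prediction.

    evidence: non-negative per-class evidence, e.g. ReLU of logits, shape (K,).
    Returns expected class probabilities and an uncertainty mass in (0, 1].
    """
    alpha = evidence + 1.0       # Dirichlet concentration parameters
    strength = alpha.sum()       # total Dirichlet strength S
    prob = alpha / strength      # expected class probabilities
    u = len(alpha) / strength    # epistemic uncertainty: K / S
    return prob, u

# Strong evidence for one class -> low uncertainty
p1, u1 = edl_uncertainty(np.array([9.0, 0.0, 0.0]))  # u1 = 3/12 = 0.25
# No evidence at all -> maximal uncertainty
p2, u2 = edl_uncertainty(np.array([0.0, 0.0, 0.0]))  # u2 = 3/3 = 1.0
```

Under this scheme, hard negatives that elicit little evidence yield high uncertainty, which a model can use to down-weight unreliable visual-attribute alignments.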