Feature Stylization and Domain-aware Contrastive Learning for Domain Generalization

Proceedings of the 29th ACM International Conference on Multimedia (MM 2021), 2021

Abstract
Domain generalization aims to enhance model robustness against domain shift without accessing the target domain. Since the number of available domains is limited during training, recent approaches focus on generating samples of novel domains. Nevertheless, they either struggle with optimization when synthesizing abundant domains or distort the original semantics. To address these issues, we propose a novel domain generalization framework in which feature statistics are utilized to transfer the original features into ones with novel domain properties. To preserve the original semantics before stylization, we first decompose features into high- and low-frequency components. Afterward, we stylize the texture cues in the low-frequency components with novel domain styles sampled from the manipulated statistics, while preserving the shape cues in the high-frequency components. As a final step, we re-merge the components to synthesize novel domain features. To enhance domain robustness, we use the stylized features to maintain model consistency in terms of both features and outputs. We achieve feature consistency with a novel domain-aware supervised contrastive loss, which ensures domain invariance while increasing class discriminability. Moreover, we enhance output consistency with a consistency loss that minimizes the disagreement between the outputs for original and stylized features. Experimental results demonstrate the effectiveness of the proposed feature stylization and losses. Through quantitative comparisons, we verify that our method outperforms existing state-of-the-art methods on two benchmarks, PACS and Office-Home.
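
The feature-stylization step described in the abstract can be illustrated with a short sketch. The code below is a minimal PyTorch interpretation under stated assumptions, not the authors' implementation: the Fourier low-pass split, the Gaussian perturbation of channel-wise statistics, and all names (`freq_decompose`, `stylize_low_freq`, `radius`, `sigma_noise`) are hypothetical choices made for illustration.

```python
# Minimal sketch of "stylize low-frequency texture, keep high-frequency shape".
# Assumptions: frequency split via a circular Fourier low-pass mask, and novel-domain
# styles obtained by perturbing channel-wise mean/std with Gaussian noise.
import torch
import torch.fft


def freq_decompose(feat, radius=0.25):
    """Split a feature map (B, C, H, W) into low- and high-frequency parts."""
    B, C, H, W = feat.shape
    f = torch.fft.fftshift(torch.fft.fft2(feat, norm="ortho"), dim=(-2, -1))
    yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, H, device=feat.device),
        torch.linspace(-1, 1, W, device=feat.device),
        indexing="ij",
    )
    mask = ((xx ** 2 + yy ** 2).sqrt() <= radius).float()  # centered low-pass mask
    low = torch.fft.ifft2(
        torch.fft.ifftshift(f * mask, dim=(-2, -1)), norm="ortho"
    ).real
    high = feat - low  # residual carries the high-frequency (shape) cues
    return low, high


def stylize_low_freq(low, eps=1e-6, sigma_noise=0.1):
    """AdaIN-style restyling: swap channel statistics for perturbed ('novel domain') ones."""
    mu = low.mean(dim=(2, 3), keepdim=True)
    std = low.std(dim=(2, 3), keepdim=True) + eps
    normalized = (low - mu) / std
    # Sample novel-domain statistics around the originals (assumed manipulation).
    new_mu = mu + torch.randn_like(mu) * sigma_noise * mu.abs()
    new_std = std + torch.randn_like(std) * sigma_noise * std
    return normalized * new_std + new_mu


def stylize_features(feat):
    """Stylize texture (low-frequency) cues while preserving shape (high-frequency) cues."""
    low, high = freq_decompose(feat)
    return stylize_low_freq(low) + high  # re-merge into a novel-domain feature
```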
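
The two consistency objectives, the domain-aware supervised contrastive loss and the output consistency loss, can likewise be sketched. The formulation below is a hedged approximation: the temperature, the symmetric-KL choice, and the handling of positives are assumptions rather than the paper's exact definitions.

```python
# Hedged sketch of the two consistency terms: a supervised contrastive loss over
# original + stylized features (positives share a class label regardless of the
# synthesized domain) and a symmetric-KL output consistency term.
import torch
import torch.nn.functional as F


def domain_aware_supcon_loss(feat_orig, feat_styl, labels, temperature=0.1):
    """Supervised contrastive loss over the union of original and stylized features."""
    z = F.normalize(torch.cat([feat_orig, feat_styl], dim=0), dim=1)  # (2B, D)
    y = torch.cat([labels, labels], dim=0)                            # (2B,)
    sim = z @ z.t() / temperature                                     # (2B, 2B)
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))                   # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)        # log-softmax per anchor
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count
    return loss.mean()


def output_consistency_loss(logits_orig, logits_styl):
    """Symmetric KL divergence between predictions on original and stylized features."""
    p = F.log_softmax(logits_orig, dim=1)
    q = F.log_softmax(logits_styl, dim=1)
    return 0.5 * (
        F.kl_div(q, p, log_target=True, reduction="batchmean")
        + F.kl_div(p, q, log_target=True, reduction="batchmean")
    )
```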
Keywords
Domain Generalization, Deep Learning, Image Classification, Feature Stylization, Contrastive Learning