Exploring Explicitly Disentangled Features for Domain Generalization

IEEE Transactions on Circuits and Systems for Video Technology (2023)

Abstract
Domain generalization (DG) is a challenging task that aims to train a robust model using only labeled source data, such that it generalizes well to unseen target data. The domain gap between the source and target data may degrade performance. A plethora of methods resort to learning domain-invariant features to overcome this difficulty. However, these methods require sophisticated network designs or training strategies, causing inefficiency and complexity. In this paper, we first analyze and reclassify features into two categories, i.e., implicitly disentangled features and explicitly disentangled ones. Since we aim to design a generic DG algorithm that alleviates the problems mentioned above, we focus on explicitly disentangled features due to their simplicity and interpretability. Based on our analysis, we find that the shape features of images are a simple and elegant choice. We extract the shape features from two aspects. On the network side, we propose Multi-Scale Amplitude Mixing (MSAM) to strengthen shape features at different layers of the network via the Fourier transform. On the input side, we propose a new data augmentation method called Random Shape Warping (RSW), which encourages the model to concentrate on the global structures of objects. RSW randomly distorts the local parts of images while keeping their global structures unchanged, which further improves the robustness of the model. Our methods are simple yet efficient and can be conveniently used as plug-and-play modules. They outperform state-of-the-art (SOTA) methods without bells and whistles.
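The Fourier amplitude-mixing idea behind MSAM can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration only: the function name amplitude_mix, the mixing ratio lam, and the choice of mixing a batch with a shuffled copy of itself are assumptions, and the paper's exact multi-scale placement inside the network is not reproduced. The key point it shows is that the phase spectrum, which carries shape and structure information, is left untouched while the amplitude (style) is mixed.

import torch

def amplitude_mix(x, x_ref, lam=0.5):
    # 2D FFT over the spatial dimensions; assumes tensors of shape [B, C, H, W].
    fft_x = torch.fft.fft2(x, dim=(-2, -1))
    fft_r = torch.fft.fft2(x_ref, dim=(-2, -1))

    amp_x, pha_x = torch.abs(fft_x), torch.angle(fft_x)
    amp_r = torch.abs(fft_r)

    # Interpolate the amplitude spectra only; the phase of x is kept, so the
    # global structure (shape) of x is preserved while its "style" varies.
    amp_mixed = lam * amp_x + (1.0 - lam) * amp_r

    mixed = torch.polar(amp_mixed, pha_x)
    return torch.fft.ifft2(mixed, dim=(-2, -1)).real

# Example usage: mix each sample with a shuffled batch member as the amplitude reference.
# x = torch.randn(8, 3, 32, 32)
# x_aug = amplitude_mix(x, x[torch.randperm(x.size(0))], lam=0.7)

A rough sketch of a local-warping augmentation in the spirit of RSW follows the same hedging: a coarse random displacement field is upsampled into a smooth dense flow and applied with grid_sample, so local parts are slightly distorted while the global object structure is preserved. The function name, the max_shift and grid_size parameters, and the bilinear-flow construction are illustrative assumptions rather than the paper's exact procedure.

import torch
import torch.nn.functional as F

def random_shape_warp(x, max_shift=0.05, grid_size=4):
    # x: [B, C, H, W]. A coarse random offset field (in normalized coordinates)
    # is upsampled into a smooth dense flow, so the warp is small and local.
    b, c, h, w = x.shape
    coarse = (torch.rand(b, 2, grid_size, grid_size, device=x.device) * 2 - 1) * max_shift
    flow = F.interpolate(coarse, size=(h, w), mode='bilinear', align_corners=True)

    # Identity sampling grid in [-1, 1], as expected by grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=x.device),
        torch.linspace(-1, 1, w, device=x.device),
        indexing='ij')
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)

    grid = base + flow.permute(0, 2, 3, 1)  # [B, H, W, 2]
    return F.grid_sample(x, grid, mode='bilinear',
                         padding_mode='border', align_corners=True)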
Keywords
Domain generalization, feature disentanglement, Fourier transform, data augmentation