Asymmetric slack contrastive learning for full use of feature information in image translation

Yusen Zhang, Min Li, Yao Gou, Yujie He

Knowledge-Based Systems (2024)

Abstract
Recently, contrastive learning has proven powerful for cross-domain feature learning and has been widely used in image translation tasks. However, existing methods often overlook the differences between positive and negative samples in their ability to drive model optimization and treat them equally, which weakens the feature representation ability of the generative models. In this paper, we propose a novel image translation model based on asymmetric slack contrastive learning. We design a new asymmetric contrastive loss by introducing a slack adjustment factor. Theoretical analysis shows that it can adaptively adjust the optimization according to different positive and negative samples and significantly improve optimization efficiency. In addition, to better preserve local structural relationships during image translation, we construct a regional differential structural consistency correction block using differential vectors. Comparative experiments were conducted against six existing methods on five datasets. The results indicate that our method maintains structural consistency between cross-domain images at a deeper level and is more effective at establishing real image-domain mapping relations, thereby generating higher-quality images.
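The abstract does not give the loss formulation, so the following is only an illustrative sketch of how a slack adjustment factor might be introduced into a PatchNCE-style contrastive loss used in image translation. The function name, tensor shapes, and the multiplicative placement of the slack factor on the negative-pair logits are assumptions for illustration, not the paper's actual definition.

```python
import torch
import torch.nn.functional as F

def slack_contrastive_loss(query, positive, negatives, tau=0.07, slack=1.0):
    """PatchNCE-style contrastive loss with a hypothetical slack factor
    applied asymmetrically to the negative-pair logits.

    query:      (N, C) anchor patch features from the translated image
    positive:   (N, C) corresponding patch features from the source image
    negatives:  (N, K, C) non-corresponding source patches
    """
    # L2-normalize so dot products are cosine similarities.
    query = F.normalize(query, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Positive logits: similarity between matching patches.
    l_pos = (query * positive).sum(dim=-1, keepdim=True)           # (N, 1)

    # Negative logits, scaled by the slack factor so negatives can be
    # weighted differently from the positive pair (assumed mechanism).
    l_neg = torch.einsum('nc,nkc->nk', query, negatives) * slack   # (N, K)

    # Cross-entropy with the positive pair as class 0 for every anchor.
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)
```

With slack = 1.0 this reduces to a standard InfoNCE/PatchNCE loss; values other than 1.0 change how strongly negative samples push the anchor, which is one plausible way an asymmetric treatment of positives and negatives could be realized.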
Key words
Image translation, Cross-domain learning, Asymmetric slack contrast, Contrastive learning, Structure consistency