Separating common from salient patterns with Contrastive Representation Learning
ICLR 2024
Abstract
Contrastive Analysis is a sub-field of Representation Learning that aims at
separating the common factors of variation between two datasets, a background
(e.g., healthy subjects) and a target (e.g., diseased subjects), from the
salient factors of variation, present only in the target dataset. Despite their
relevance, current models based on Variational Auto-Encoders have shown poor
performance in learning semantically-expressive representations. On the other
hand, Contrastive Representation Learning has shown tremendous performance
leaps in various applications (classification, clustering, etc.). In this work,
we propose to leverage the ability of Contrastive Learning to learn
semantically expressive representations well adapted for Contrastive Analysis.
We reformulate it under the lens of the InfoMax Principle and identify two
Mutual Information terms to maximize and one to minimize. We decompose the
first two terms into an Alignment and a Uniformity term, as commonly done in
Contrastive Learning. Then, we motivate a novel Mutual Information minimization
strategy to prevent information leakage between common and salient
distributions. We validate our method, called SepCLR, on three visual datasets
and three medical datasets, specifically conceived to assess the pattern
separation capability in Contrastive Analysis. Code available at
https://github.com/neurospin-projects/2024_rlouiset_sep_clr.
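The Alignment/Uniformity decomposition mentioned in the abstract is a standard way to rewrite a contrastive objective: Alignment pulls embeddings of positive pairs together, while Uniformity spreads all embeddings over the hypersphere via a Gaussian pairwise potential. The sketch below is an illustrative, minimal formulation of these two terms only (it is not the authors' SepCLR implementation, and the function names are ours):

```python
import math

def alignment(pairs, alpha=2):
    """Mean alpha-powered Euclidean distance between positive pairs.

    `pairs` is a list of (x, y) tuples, each a list of floats (an embedding).
    Lower is better: positive pairs should map to nearby points.
    """
    total = 0.0
    for x, y in pairs:
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
        total += dist ** alpha
    return total / len(pairs)

def uniformity(points, t=2):
    """Log of the mean pairwise Gaussian potential over all embeddings.

    Lower is better: minimized when points spread uniformly on the sphere.
    """
    potentials = []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            potentials.append(math.exp(-t * d2))
    return math.log(sum(potentials) / len(potentials))
```

For example, two antipodal unit vectors give the lowest possible pairwise potential in 2D (`uniformity([[1, 0], [-1, 0]])` is `-2 * t * 2**2 / 2 = -8` with `t=2`), while an identical positive pair gives an alignment loss of exactly zero.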
Keywords
Contrastive Learning, Mutual Information, Contrastive Analysis, Disentanglement