Disentangled Representation Learning through Geometry Preservation with the Gromov-Monge Gap
arXiv (2024)
Abstract
Learning disentangled representations in an unsupervised manner is a
fundamental challenge in machine learning. Solving it may unlock progress on
other problems, such as generalization, interpretability, or fairness. While
remarkably difficult in general, recent work has shown that disentanglement is
provably achievable under additional assumptions that leverage geometric
constraints, such as local isometry. Building on these insights, we propose a
novel perspective on disentangled representation learning grounded in
quadratic optimal transport. Specifically, we formulate the problem in the
Gromov-Monge setting, which seeks isometric mappings between distributions
supported on different spaces. We propose the Gromov-Monge Gap (GMG), a
regularizer that quantifies how well an arbitrary push-forward map between two
distributions supported on different spaces preserves geometry. We demonstrate
the effectiveness of GMG regularization for disentanglement on four standard
benchmarks. Moreover, we show that geometry preservation can even encourage
unsupervised disentanglement without the standard reconstruction objective,
making the underlying model decoder-free and promising a more practically
viable and scalable approach to unsupervised disentanglement.
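The abstract does not reproduce the formal definitions. As a rough guide, and under the assumption that the paper follows the standard quadratic (Gromov-Monge) formulation of optimal transport, the objects involved can be sketched as follows: given a source measure $\mu$ on a space $\mathcal{X}$ with cost $c_{\mathcal{X}}$ and a target measure $\nu$ on a space $\mathcal{Y}$ with cost $c_{\mathcal{Y}}$, the Gromov-Monge problem seeks a push-forward map of minimal geometric distortion,
\[
\mathrm{GM}(\mu,\nu) \;=\; \inf_{T \colon T_{\#}\mu = \nu} \;\iint \bigl( c_{\mathcal{X}}(x,x') - c_{\mathcal{Y}}(T(x),T(x')) \bigr)^{2} \,\mathrm{d}\mu(x)\,\mathrm{d}\mu(x'),
\]
and a gap-style regularizer for an arbitrary map $T$, in the spirit of the GMG described above, would measure its excess distortion over the best achievable one,
\[
\mathrm{GMG}(T) \;=\; \iint \bigl( c_{\mathcal{X}}(x,x') - c_{\mathcal{Y}}(T(x),T(x')) \bigr)^{2} \,\mathrm{d}\mu(x)\,\mathrm{d}\mu(x') \;-\; \mathrm{GM}\bigl(\mu, T_{\#}\mu\bigr),
\]
which is nonnegative and vanishes exactly when $T$ preserves geometry as well as any map with the same image measure can. The precise costs, normalizations, and estimators used are an assumption here; they are specified in the paper itself.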