DualContrast: Unsupervised Disentangling of Content and Transformations with Implicit Parameterization

CoRR (2024)

Abstract
Unsupervised disentanglement of content and transformation has recently drawn much research attention, given its efficacy in solving downstream unsupervised tasks such as clustering, alignment, and shape analysis. The problem is particularly important for shape-focused real-world scientific image datasets, where such disentanglement is highly relevant to downstream analysis. Existing works address the problem by explicitly parameterizing the transformation factors, which significantly reduces their expressiveness; moreover, these approaches are not applicable when the transformations cannot be readily parameterized. An alternative to such explicit approaches is self-supervised learning with data augmentation, which implicitly disentangles transformations and content. We demonstrate that existing self-supervised methods with data augmentation yield poor disentanglement of content and transformations in real-world scenarios. We therefore develop DualContrast, a novel self-supervised method for unsupervised disentanglement of content and transformations in shape-focused image datasets. Extensive experiments showcase the superiority of DualContrast over existing self-supervised and explicit parameterization approaches. We leverage DualContrast to disentangle protein identities and protein conformations in cellular 3D protein images. Moreover, we also use DualContrast to disentangle transformations in MNIST, viewpoint in the Linemod Object dataset, and human movement deformation in the Starmen dataset.
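To make the abstract's idea of implicitly disentangling content and transformation via data augmentation concrete, below is a minimal, hypothetical PyTorch sketch of a "dual" contrastive setup: augmented views of the same image are pulled together in a content embedding space and discouraged from matching in a transformation embedding space. The class and function names (DualEncoder, dual_contrast_step), the toy architecture, and the specific losses are illustrative assumptions, not the paper's actual DualContrast implementation.

```python
# Illustrative sketch only (not the paper's code): one backbone with two
# projection heads, producing a content code z_c and a transformation code z_t.
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    """Standard InfoNCE / NT-Xent loss: row i of `positive` is the positive
    for row i of `anchor`; all other rows in the batch act as negatives."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    logits = anchor @ positive.t() / temperature       # (B, B) similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)

class DualEncoder(nn.Module):
    """Toy CNN backbone with separate content and transformation heads."""
    def __init__(self, dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.content_head = nn.Linear(64, dim)
        self.transform_head = nn.Linear(64, dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.content_head(h), self.transform_head(h)

def dual_contrast_step(model, x, augment):
    """One hypothetical training step: x and augment(x) share content but
    differ in transformation, so the content loss treats the pair as positives
    while the transformation loss penalizes their similarity."""
    x_aug = augment(x)
    zc1, zt1 = model(x)
    zc2, zt2 = model(x_aug)
    content_loss = info_nce(zc1, zc2)                  # pull content codes together
    # Simple stand-in for a transformation-side contrast: discourage the
    # transformation codes of differently transformed views from aligning.
    transform_loss = F.cosine_similarity(zt1, zt2).mean()
    return content_loss + transform_loss
```

As a usage example, one could call dual_contrast_step(model, batch, augment) inside a standard training loop, where augment applies a random rotation or elastic deformation; the key design choice the abstract points to is that the augmentation defines the transformation factor implicitly, without an explicit parameterization.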