Identifiable Latent Neural Causal Models
arXiv (2024)
Abstract
Causal representation learning seeks to uncover latent, high-level causal
representations from low-level observed data. It is particularly well suited to
predictions under unseen distribution shifts, because such shifts can generally
be interpreted as consequences of interventions. Leveraging seen distribution
shifts therefore becomes a natural strategy for identifying causal
representations, which in turn benefits predictions under previously unseen
distributions. Determining which types (or conditions) of distribution shifts
contribute to the identifiability of causal representations is critical. This
work establishes a necessary and sufficient condition characterizing the types
of distribution shifts that yield identifiability in the context of latent
additive noise models. Furthermore, we present partial identifiability results
when only a subset of the distribution shifts meets the condition, and we
extend our findings to latent post-nonlinear causal models. We translate these
results into a practical algorithm for recovering reliable latent causal
representations. Guided by the underlying theory, our algorithm demonstrates
strong performance across a diverse range of synthetic and real-world datasets,
and the empirical observations align closely with the theoretical findings,
affirming the robustness and effectiveness of our approach.
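
For readers unfamiliar with the model classes named above, here is a minimal
sketch of the standard formulations from the causal discovery literature; the
symbols (z, x, f_i, g, h_i, eps_i) are generic illustrative names chosen here,
not notation quoted from the paper:

    z_i = f_i(pa(z_i)) + eps_i         (latent additive noise model: each latent
                                        causal variable is a function of its
                                        parents plus independent noise)
    x   = g(z_1, ..., z_n)             (observations are a nonlinear mixing of
                                        the latent variables)
    z_i = h_i(f_i(pa(z_i)) + eps_i)    (post-nonlinear extension, with each h_i
                                        an invertible function)

Under this reading, a distribution shift corresponds to an intervention that
changes some f_i or the distribution of eps_i, which is why shifts observed at
training time can carry information about the identifiability of the latent
causal structure.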