Normalizing flows for conditional independence testing

Knowledge and Information Systems (2023)

Abstract
Detecting conditional independencies plays a key role in several statistical and machine learning tasks, especially in causal discovery algorithms, yet it remains a highly challenging problem due to the dimensionality and complex relationships present in data. In this study, we introduce LCIT (Latent representation-based Conditional Independence Test), a novel method for conditional independence testing based on representation learning. Our main contribution is a hypothesis testing framework in which, to test for the independence between X and Y given Z, we first learn to infer latent representations of the target variables X and Y that contain no information about the conditioning variable Z. The latent variables are then examined for any significant remaining dependence, which can be done with a conventional correlation test. Moreover, LCIT can handle discrete and mixed-type data by converting discrete variables into the continuous domain via variational dequantization. Empirical evaluations show that LCIT consistently outperforms several state-of-the-art baselines under different evaluation metrics and adapts well to nonlinear, high-dimensional, and mixed data settings on a diverse collection of synthetic and real data sets.
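The two-stage procedure described in the abstract can be illustrated with a minimal sketch. It assumes, purely for illustration, a single conditional affine (Gaussian) flow per target variable: X|Z and Y|Z are mapped to latent codes intended to carry no information about Z, and a Pearson correlation test is then applied to the latents. The names ConditionalAffineFlow, fit_flow, and lcit_style_test are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of an LCIT-style test (not the authors' implementation):
# a conditional affine flow removes the influence of Z from each target variable,
# then a standard correlation test checks for remaining dependence.
import numpy as np
import torch
import torch.nn as nn
from scipy import stats

class ConditionalAffineFlow(nn.Module):
    """Single affine flow e = (x - mu(z)) / sigma(z), trained by Gaussian NLL."""
    def __init__(self, z_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),          # outputs mu(z) and log sigma(z)
        )

    def latent(self, x, z):
        mu, log_sigma = self.net(z).chunk(2, dim=1)
        return (x - mu) * torch.exp(-log_sigma), log_sigma

    def nll(self, x, z):
        e, log_sigma = self.latent(x, z)
        # standard-normal base density plus the log|det Jacobian| of the affine map
        return (0.5 * e.pow(2) + log_sigma).mean()

def fit_flow(x, z, epochs=500, lr=1e-2):
    """Fit the flow and return the latent representation of x given z."""
    flow = ConditionalAffineFlow(z.shape[1])
    opt = torch.optim.Adam(flow.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = flow.nll(x, z)
        loss.backward()
        opt.step()
    with torch.no_grad():
        e, _ = flow.latent(x, z)
    return e.squeeze(1).numpy()

def lcit_style_test(x, y, z):
    """Return the p-value of the correlation between the latent codes of X and Y."""
    x = torch.tensor(x, dtype=torch.float32).reshape(-1, 1)
    y = torch.tensor(y, dtype=torch.float32).reshape(-1, 1)
    z = torch.tensor(z, dtype=torch.float32).reshape(len(x), -1)
    e_x, e_y = fit_flow(x, z), fit_flow(y, z)
    r, p = stats.pearsonr(e_x, e_y)
    return p

# Toy check: X and Y both depend on Z but are conditionally independent given Z.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
x = z[:, 0] + 0.5 * rng.normal(size=500)
y = z[:, 0] ** 2 + 0.5 * rng.normal(size=500)
print(lcit_style_test(x, y, z))   # large p-value expected: do not reject CI
```

The single affine layer here only serves to show the structure of the test (representation learning that strips out Z, followed by a conventional correlation test); the paper uses more expressive normalizing flows and additionally handles discrete variables via variational dequantization.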
Keywords
Conditional independence, Hypothesis testing, Representation learning, Generative models, Normalizing flows, Mixed data