Incorporating Normalized L1 Penalty and Eigenvalue Constraint for Causal Structure Learning

International Journal on Artificial Intelligence Tools (2023)

Abstract
Inferring causal relationships is key to data science. Learning causal structures in the form of directed acyclic graphs (DAGs) is a widely adopted approach to uncovering causal relationships; nonetheless, it is a challenging task owing to its exponential search space. A recent approach formulates structure learning as a continuous constrained optimization problem that learns a causal relation matrix, and nonlinear variants of it can uncover nonlinear causal relationships. However, the nonlinear variant that includes the l1 penalty in its optimization objective may not effectively eliminate false predictions. In this paper, we investigate two defects of this model: the l1 penalty cannot effectively make the relation matrix sparse, and thus introduces false predictions; and the acyclicity constraint cannot identify large cycles within the margin of identification error, and thus cannot guarantee acyclicity of the inferred causal relationships. Based on theoretical and empirical analysis of these defects, we propose a normalized l1 penalty, which replaces the original l1 penalty with a normalized first-order matrix norm, and an eigenvalue-based constraint that substitutes for the original acyclicity constraint. We compare the resulting model, NEC, with three baseline models and show considerable performance improvement. We further conduct experiments demonstrating the effectiveness of the normalized l1 penalty and the eigenvalue constraint.
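The abstract does not spell out the exact formulations, but the two ingredients can be sketched as follows. A directed graph with weighted adjacency matrix W is acyclic iff W ∘ W (elementwise square) is nilpotent, i.e. all of its eigenvalues are zero, so an eigenvalue-based acyclicity measure can track the largest eigenvalue magnitude. The `normalized_l1` helper below uses one plausible normalization (entrywise l1 norm scaled by the Frobenius norm); both function names and the normalization choice are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def eigenvalue_acyclicity(W: np.ndarray) -> float:
    """Eigenvalue-based acyclicity measure (illustrative sketch).

    The graph of W is acyclic iff W ∘ W is nilpotent, i.e. every
    eigenvalue of the elementwise square is zero, so the maximum
    eigenvalue magnitude is 0 exactly for DAGs and positive otherwise.
    """
    eigvals = np.linalg.eigvals(W * W)  # elementwise square keeps entries >= 0
    return float(np.max(np.abs(eigvals)))

def normalized_l1(W: np.ndarray, eps: float = 1e-8) -> float:
    """Hypothetical 'normalized first-order matrix norm': entrywise l1
    norm divided by the Frobenius norm (one possible normalization)."""
    return float(np.abs(W).sum() / (np.linalg.norm(W) + eps))

# A DAG (strictly upper-triangular weights) versus a 2-cycle.
W_dag = np.array([[0.0, 1.5], [0.0, 0.0]])
W_cyc = np.array([[0.0, 1.0], [1.0, 0.0]])
print(eigenvalue_acyclicity(W_dag))  # 0.0 -- nilpotent, hence acyclic
print(eigenvalue_acyclicity(W_cyc))  # 1.0 -- the cycle yields a nonzero eigenvalue
```

Unlike the trace-of-matrix-exponential constraint used by earlier continuous formulations, an eigenvalue-based measure does not shrink exponentially for long cycles, which is the identification-error issue the abstract points to.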
Keywords
Causal structure learning, normalized l1 penalty, eigenvalue constraint, continuous optimization