Towards Learning and Explaining Indirect Causal Effects in Neural Networks

Abbavaram Gowtham Reddy, Saketh Bachu, Harsharaj Pathak, Benin Godfrey L, Varshaneya V, Vineeth N Balasubramanian, Satyanarayan Kar

AAAI 2024 (2024)

Abstract
Recently, there has been a growing interest in learning and explaining causal effects within Neural Network (NN) models. Owing to the nature of NN architectures, previous approaches consider only direct and total causal effects, assuming independence among input variables. We view an NN as a structural causal model (SCM) and extend our focus to include indirect causal effects by introducing feedforward connections among input neurons. We propose an ante-hoc method that captures and maintains direct, indirect, and total causal effects during NN model training. We also propose an algorithm for quantifying learned causal effects in an NN model, along with efficient approximation strategies for quantifying causal effects in high-dimensional data. Extensive experiments conducted on synthetic and real-world datasets demonstrate that the causal effects learned by our ante-hoc method better approximate the ground truth effects compared to existing methods.
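To make the direct/indirect/total decomposition referenced in the abstract concrete, the following is a minimal sketch on a toy linear SCM (X1 -> X2 -> Y and X1 -> Y). It is not the paper's ante-hoc method or its quantification algorithm; the structural equations, coefficients, and function names are illustrative assumptions only.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# direct, indirect, and total causal effects on a toy linear SCM
# with edges X1 -> X2 -> Y and X1 -> Y.

def f_x2(x1):
    # structural equation for the mediator X2 (assumed coefficient)
    return 2.0 * x1

def f_y(x1, x2):
    # structural equation for the outcome Y (assumed coefficients)
    return 3.0 * x1 + 0.5 * x2

def total_effect(x1_treated, x1_baseline):
    # intervene on X1 and let the change propagate through the mediator X2
    y_treated = f_y(x1_treated, f_x2(x1_treated))
    y_baseline = f_y(x1_baseline, f_x2(x1_baseline))
    return y_treated - y_baseline

def direct_effect(x1_treated, x1_baseline):
    # intervene on X1 while holding the mediator X2 at its baseline value
    x2_fixed = f_x2(x1_baseline)
    return f_y(x1_treated, x2_fixed) - f_y(x1_baseline, x2_fixed)

if __name__ == "__main__":
    te = total_effect(1.0, 0.0)   # 3.0 + 0.5 * 2.0 = 4.0
    de = direct_effect(1.0, 0.0)  # 3.0
    ie = te - de                  # effect mediated through X2 = 1.0
    print(f"total={te}, direct={de}, indirect={ie}")
```

In this sketch the indirect effect is recovered as the gap between the total and direct effects; the paper instead learns such effects inside an NN treated as an SCM, with feedforward connections among input neurons playing the role of the mediator paths.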
Keywords
ML: Deep Learning Algorithms, ML: Causal Learning