Sample, estimate, aggregate: A recipe for causal discovery foundation models
CoRR (2024)
Abstract
Causal discovery, the task of inferring causal structure from data, promises
to accelerate scientific research, inform policy making, and more. However, the
per-dataset nature of existing causal discovery algorithms renders them slow,
data hungry, and brittle. Inspired by foundation models, we propose a causal
discovery framework where a deep learning model is pretrained to resolve
predictions from classical discovery algorithms run over smaller subsets of
variables. This method is enabled by the observation that the outputs of
classical algorithms are fast to compute on small problems, informative about
the (marginal) structure of the data, and comparable as objects across
datasets. Our method achieves state-of-the-art performance on
synthetic and realistic datasets, generalizes to data generating mechanisms not
seen during training, and offers inference speeds that are orders of magnitude
faster than existing models.
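The sample-estimate-aggregate recipe described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's method: a correlation-thresholded skeleton stands in for a classical discovery algorithm, and majority voting over subset estimates stands in for the pretrained aggregation model. All function names here are hypothetical.

```python
import random
from itertools import combinations

def pairwise_corr(xs, ys):
    # Plain Pearson correlation, computed with the stdlib only.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / ((vx * vy) ** 0.5 + 1e-12)

def subset_skeleton(data, subset, thresh=0.3):
    # Toy stand-in for a classical discovery algorithm: return the
    # undirected edges among `subset` whose |correlation| exceeds `thresh`.
    edges = set()
    for i, j in combinations(subset, 2):
        if abs(pairwise_corr(data[i], data[j])) > thresh:
            edges.add(frozenset((i, j)))
    return edges

def sample_estimate_aggregate(data, k=3, rounds=60, vote=0.5, seed=0):
    # Sample small variable subsets, estimate structure on each, then
    # aggregate the per-subset estimates by majority vote per pair.
    rng = random.Random(seed)
    d = len(data)
    counts, seen = {}, {}
    for _ in range(rounds):
        subset = rng.sample(range(d), k)
        est = subset_skeleton(data, subset)
        for pair in combinations(subset, 2):
            key = frozenset(pair)
            seen[key] = seen.get(key, 0) + 1
            if key in est:
                counts[key] = counts.get(key, 0) + 1
    return {tuple(sorted(p)) for p in seen
            if counts.get(p, 0) / seen[p] >= vote}

# Synthetic chain X0 -> X1 -> X2, with X3 independent of the rest.
gen = random.Random(1)
n = 400
x0 = [gen.gauss(0, 1) for _ in range(n)]
x1 = [2 * a + gen.gauss(0, 0.1) for a in x0]
x2 = [2 * b + gen.gauss(0, 0.1) for b in x1]
x3 = [gen.gauss(0, 1) for _ in range(n)]
edges = sample_estimate_aggregate([x0, x1, x2, x3])
```

In the paper's framework, each per-subset estimate would come from a classical discovery algorithm and the aggregation would be performed by the pretrained deep model rather than voting; the sketch only mirrors the overall data flow.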