Toward Out-of-Distribution Generalization Through Inductive Biases

Philosophy and Theory of Artificial Intelligence 2021 (2022)

Abstract
State-of-the-art Machine Learning systems can process and analyze large amounts of data, but they still struggle to generalize to out-of-distribution scenarios. In Judea Pearl's words, "Data are profoundly dumb" (Pearl & Mackenzie 2018); possessing a model of the world, a representation through which to frame reality, is a necessary requirement for discriminating between relevant and irrelevant information and for dealing with unknown scenarios. The aim of this paper is to address the crucial challenge of out-of-distribution generalization in automated systems by developing an understanding of how human agents build models to act in a dynamic environment. Pearl describes the steps needed to reach this goal through the metaphor of the Ladder of Causation. In this paper, I argue for the relevance of inductive biases in enabling an agent to reach the second rung of the Ladder: that of actively interacting with the environment.
Keywords
Inductive biases, Generalization, Decision making, Causality, Hybrid AI