What Do Hebbian Learners Learn? Reduction Axioms for Iterated Hebbian Learning

Caleb Schultz Kisby, Saúl A. Blanco, Lawrence S. Moss

AAAI 2024 (2024)

Abstract
This paper is a contribution to neural network semantics, a foundational framework for neuro-symbolic AI. The key insight of this theory is that logical operators can be mapped to operators on neural network states. In this paper, we do this for a neural network learning operator. We map a dynamic operator [φ] to iterated Hebbian learning, a simple learning policy that updates a neural network by repeatedly applying Hebb's learning rule until the net reaches a fixed point. Our main result is that we can "translate away" [φ]-formulas via reduction axioms. This means that completeness for the logic of iterated Hebbian learning follows from completeness of the base logic. These reduction axioms also provide (1) a human-interpretable description of iterated Hebbian learning as a kind of plausibility upgrade, and (2) an approach to building neural networks with guarantees on what they can learn.
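The learning policy described in the abstract is easy to state operationally. The following is a minimal Python sketch, not the paper's formal construction: it assumes a real-valued weight matrix and a hard weight cap so that iteration actually reaches a fixed point (the names hebb_step, iterated_hebb, eta, and w_max are illustrative and not taken from the paper).

```python
import numpy as np

def hebb_step(W, x, eta=0.5, w_max=1.0):
    # One application of Hebb's rule in outer-product form:
    # strengthen the weight between every pair of co-active units.
    # Weights are capped at w_max (an assumption here) so that
    # repeated application can converge.
    return np.minimum(W + eta * np.outer(x, x), w_max)

def iterated_hebb(W, x, eta=0.5, w_max=1.0, max_iters=10_000):
    # Apply Hebb's rule repeatedly until the weight matrix stops
    # changing, i.e., until the net reaches a fixed point.
    for _ in range(max_iters):
        W_next = hebb_step(W, x, eta=eta, w_max=w_max)
        if np.allclose(W_next, W):
            return W_next
        W = W_next
    raise RuntimeError("no fixed point reached within max_iters")

# Example: units 0 and 1 are co-active; their mutual weights
# saturate at w_max, after which the update is a no-op.
W0 = np.zeros((3, 3))
x = np.array([1.0, 1.0, 0.0])
print(iterated_hebb(W0, x))
```

Under these assumptions the fixed point is the matrix in which every weight between co-active units has saturated, which matches the intuitive reading of iterated Hebbian learning as an upgrade that maximally strengthens the connections exercised by the input.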
Keywords
ML: Neuro-Symbolic Learning; PEAI: Philosophical Foundations of AI; KRR: Reasoning with Beliefs; ML: Transparent, Interpretable, Explainable ML; KRR: Nonmonotonic Reasoning