Heuristic Construction of Explanations Through Associative Abduction

Semantic Scholar (2020)

Abstract
Humans regularly explain observations of their environment in terms of background knowledge. This process is better characterized as abduction than as deduction, since it often requires introduction of assumptions about unobserved relations. In this paper, we present a theory of abductive explanation that builds on earlier work but extends it in new directions. The theory distinguishes between definitions and constraints, with the former used to elaborate existing explanations and the latter used to detect and repair inconsistencies. We also describe PENUMBRA, an implemented system that instantiates the theory, and demonstrate its behavior on a number of domains. The system carries out heuristic search through a space of explanations, processing observations incrementally, and generating alternative accounts for a given set of inputs. We conclude by discussing related approaches to explanation, limitations of the implementation, and directions for future research.

1. Background and Motivation

One distinctive feature of human cognition is the ability to understand complex situations and events. This invariably involves explaining observations in terms of available knowledge. Moreover, these explanations are typically abductive in character, in that they incorporate plausible assumptions that are neither observed nor derived deductively. Abductive explanation is a general ability that arises in many contexts, from sentence processing and story understanding (Winston, 2012) to scene interpretation and plan recognition (Blaylock & Allen, 2005) to diagnosis (Reggia, Nau, & Wang, 1985). We would like a computational theory of the structures that underlie this capacity and the processes that operate over them. Ultimately, this should contribute to a more comprehensive cognitive architecture (Langley, Laird, & Rogers, 2009) that supports goal-directed activity over time, but here we will focus only on conceptual understanding. Let us consider a simple example.
© 2019 Cognitive Systems Foundation. All rights reserved.
P. LANGLEY AND B. MEADOWS

Suppose we are told that Abe possesses some cash and Bob possesses a car, but that later Abe possesses the same car. Although we did not observe any transaction, we can reasonably assume that one took place. Two explanations come immediately to mind. One is that Abe bought the car from Bob using money; another is that Abe stole the car from Bob by threatening him in some way. We also know these two explanations are mutually exclusive, in that purchases and robberies are two distinct ways to transfer possession of objects. This means that we must not only introduce plausible assumptions about unobserved events, but also consider the competing explanations and keep them separate. Later, we may hear that Abe actually gave money to Bob, eliminating theft as an alternative. More complex examples would involve multi-step inference chains that generate hierarchical accounts of observations.

In this paper, we present a cognitive systems account of such abductive explanations. Our analysis draws on standard ideas from the paradigm, including a focus on high-level cognition, the importance of structured representations and knowledge, a reliance on heuristic search, and the incorporation of constraints from human behavior, such as incremental processing of observations. The approach that we describe builds directly on two earlier efforts in this area (Bridewell & Langley, 2011; Meadows, Langley, & Emery, 2014), but extends them to incorporate richer representations and novel reasoning mechanisms. In the next section, we discuss two formulations of the abductive explanation task, along with prior results in each framework, and clarify our reasons for selecting one of them. After this, we describe a new theory for this ability, focusing first on assumptions about cognitive structures and then on processes that inspect and manipulate them.
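The bookkeeping this example requires can be sketched in a few lines: maintain the competing candidate explanations separately, and prune any candidate whose assumptions conflict with newly arriving evidence. The following is only a minimal illustration of that idea, not the system described in this paper; the predicate names and the single exclusivity constraint are invented for the example.

```python
# Each candidate explanation is represented as a set of assumed literals.
# Literal format and predicate names are invented for this illustration.
candidates = [
    {("bought", "abe", "car", "bob")},   # purchase explanation
    {("stole", "abe", "car", "bob")},    # theft explanation
]

def incompatible(assumption, fact):
    """Toy constraint: a theft is ruled out once a payment is observed."""
    return assumption[0] == "stole" and fact[0] == "gave_money"

def update(candidates, fact):
    """Drop every candidate explanation that conflicts with a new fact."""
    return [c for c in candidates
            if not any(incompatible(a, fact) for a in c)]

# Later we hear that Abe actually gave money to Bob; theft is eliminated.
candidates = update(candidates, ("gave_money", "abe", "bob"))
print(candidates)  # [{('bought', 'abe', 'car', 'bob')}]
```

A fuller treatment would represent constraints declaratively rather than as hard-coded predicates, which is closer in spirit to the definitions-versus-constraints distinction drawn in the abstract.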
Next we describe PENUMBRA, an implemented system that instantiates these theoretical ideas, along with its behavior on multiple scenarios that demonstrate its coverage. We close by discussing links to earlier work, noting limits of the implementation, and proposing directions for additional research in this area.

2. Two Formulations of Abductive Explanation

We can define any cognitive task in terms of the information provided as inputs and the content generated as outputs. However, there are often different ways to translate an informal problem into a formal specification. The literature on abductive explanation has explored two distinct statements of this mental task that we should discuss before proceeding further. These treatments are orthogonal to whether inputs are processed incrementally, an important feature of human processing whose discussion we will delay until a later section.

The first formulation borrows from classic treatments of abduction in logic and the philosophy of science (Peirce, 1878; Hempel, 1966). We can state it as:

• Given: A set of general knowledge elements K (e.g., relational rules)
• Given: A set of specific observed facts O (e.g., relational literals)
• Find: A set of specific plausible assumptions A (e.g., relational literals)
• Find: A set of proof trees P that derive elements of O from A and other elements of O with K

The key idea here is that the resulting explanation, a set of linked proof trees, must contain a proof for each observed fact. These may include default assumptions as terminal nodes, which can be shared across different proof trees, but each observation must follow deductively from these assumptions and from other observations by reasoning over available knowledge. Proof trees may correspond to causal chains, as in many scientific explanations, but this is not a requirement. We will refer to this formulation as derivational abduction because observations must be derived from other beliefs.
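The formulation above can be illustrated with a toy backward chainer that, given rules K and observations O, derives each goal where possible and collects every underivable leaf literal as an element of the assumption set A. This is only a sketch of derivational abduction under simplifying assumptions (ground goals, a first-matching-rule commitment, invented rule and predicate names); it is not an implementation of any system discussed here.

```python
def is_var(t):
    """Treat uppercase strings as rule variables (a toy convention)."""
    return isinstance(t, str) and t.isupper()

def unify(head, goal):
    """Match a rule head against a ground goal; return a binding or None."""
    if len(head) != len(goal) or head[0] != goal[0]:
        return None
    binding = {}
    for h, g in zip(head[1:], goal[1:]):
        if is_var(h):
            if binding.get(h, g) != g:
                return None
            binding[h] = g
        elif h != g:
            return None
    return binding

def subst(lit, binding):
    """Apply a variable binding to a literal."""
    return tuple(binding.get(t, t) for t in lit)

def abduce(goal, rules, observed, assumptions):
    """Derive goal from observations via rules; assume any unprovable leaf."""
    if goal in observed:
        return
    for head, body in rules:
        binding = unify(head, goal)
        if binding is not None:
            for lit in body:
                abduce(subst(lit, binding), rules, observed, assumptions)
            return                       # commit to the first matching rule
    assumptions.add(goal)                # terminal node: a default assumption

# K: a purchase transfers possession (toy rule; predicates invented).
rules = [(("possesses", "P", "X"), [("bought", "P", "X")])]
# O: we observe that Abe possesses the car, with no supporting facts.
observed = set()
assumptions = set()
abduce(("possesses", "abe", "car"), rules, observed, assumptions)
print(assumptions)  # {('bought', 'abe', 'car')}
```

Because this sketch commits to the first matching rule, it produces a single account; a system in the spirit of the one described next would instead search over alternative rule choices to generate and compare competing explanations.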
This paradigm has received considerable attention in the AI community. For example, Reggia et al. (1985) adopted the approach in their work on diagnosis, which inferred unobserved diseases that caused observed symptoms, and Hobbs et al. (1993) used it in their ap-