Conditioning, updating and lower probability zero
International Journal of Approximate Reasoning (2015)
Abstract
We discuss the issue of conditioning on events with probability zero within an imprecise-probabilistic setting, where it may happen that the conditioning event has lower probability zero, but positive upper probability. In this situation, two different conditioning rules are commonly used: regular extension and natural extension. We explain the difference between them and discuss various technical and computational aspects. Both conditioning rules are often used to update an imprecise belief model after receiving the information that some event O has occurred, simply by conditioning on O, but often little argumentation is given as to why such an approach would make sense. We help to address this problem by providing a firm foundational justification for the use of natural and regular extension as updating rules. Our results are presented in three different, closely related frameworks: sets of desirable gambles, lower previsions, and sets of probabilities. What makes our justification especially powerful is that it avoids making some of the unnecessarily strong assumptions that are traditionally adopted. For example, we do not assume that lower and upper probabilities provide bounds on some 'true' probability mass function, on which we can then simply apply Bayes's rule. Instead, a subject's lower probability for an event O is taken to be the supremum betting rate at which he is willing to bet on O, and his upper probability is the infimum betting rate at which he is willing to take bets on O; we do not assume the existence of a fair betting rate that lies in between these bounds.

Highlights
- We provide an introduction to four different imprecise probability frameworks.
- We discuss the issue of conditioning on events with (lower) probability zero.
- We study two specific conditioning rules: natural and regular extension.
- We justify the use of natural and regular extension as updating rules.
- Our justifications do not require an assumption of ideal precision.
Keywords
Conditioning, Updating, Probability zero, Regular extension, Natural extension, Sets of desirable gambles
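As a toy illustration (not taken from the paper), the difference between the two conditioning rules can be sketched for the sets-of-probabilities framework, with a credal set represented by a finite set of probability mass functions. The example below is a hypothetical three-outcome model in which the event O has lower probability zero but positive upper probability, so regular extension (condition only those mass functions that give O positive probability) and natural extension (which, for events whose lower probability is zero, becomes vacuous) genuinely disagree:

```python
def regular_extension(credal_set, A, O, eps=1e-12):
    """Regular extension for events: condition each pmf in the credal set
    that assigns positive probability to O via Bayes's rule, and return the
    resulting lower and upper conditional probabilities of A given O."""
    ratios = []
    for p in credal_set:
        pO = sum(p[x] for x in O)
        if pO > eps:  # discard pmfs that give O probability zero
            pAO = sum(p[x] for x in A & O)
            ratios.append(pAO / pO)
    return min(ratios), max(ratios)


def natural_extension(credal_set, A, O, eps=1e-12):
    """Natural extension for events: coincides with regular extension when
    the lower probability of O is positive, but becomes vacuous ([0, 1])
    when the lower probability of O is zero."""
    lower_O = min(sum(p[x] for x in O) for p in credal_set)
    if lower_O > eps:
        return regular_extension(credal_set, A, O, eps)
    return 0.0, 1.0


# Hypothetical credal set on outcomes {'a', 'b', 'c'}; the first pmf gives
# the event O = {'a', 'b'} probability zero, so O has lower probability 0
# but upper probability 0.5.
credal_set = [
    {'a': 0.0, 'b': 0.0, 'c': 1.0},
    {'a': 0.2, 'b': 0.3, 'c': 0.5},
    {'a': 0.1, 'b': 0.1, 'c': 0.8},
]
```

With A = {'a'} and O = {'a', 'b'}, regular extension yields the conditional probability interval [0.4, 0.5] (only the second and third pmfs are conditioned), while natural extension yields the vacuous interval [0, 1], illustrating how much more informative regular extension can be in this situation.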