Opportunistic Qualitative Planning in Stochastic Systems with Incomplete Preferences over Reachability Objectives

2023 American Control Conference (ACC 2023)

Abstract
Preferences play a key role in determining which goals/constraints to satisfy when not all constraints can be satisfied simultaneously. In this paper, we study how to synthesize preference-satisfying plans in a stochastic system, modeled as an MDP, given a (possibly incomplete) combinative preference model over temporally extended goals. We start by introducing new semantics to interpret preferences over infinite plays of the stochastic system. Then, we introduce a new notion of "improvement" to enable comparison between two prefixes of an infinite play. Based on this, we define two solution concepts, called Safe and Positively Improving (SPI) and Safe and Almost-Sure Improving (SASI), that enforce improvements with positive probability and with probability one, respectively. We construct a model called an improvement MDP, in which the synthesis of SPI and SASI strategies that guarantee at least one improvement reduces to computing positive and almost-sure winning strategies in an MDP. We present an algorithm to synthesize SPI and SASI strategies that induce multiple sequential improvements. We demonstrate the proposed approach using a robot motion planning problem.
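The reduction mentioned in the abstract targets qualitative (positive and almost-sure) winning in an MDP, which can be computed without transition probabilities, using only the supports of the transition function. As a rough illustration of that underlying primitive (not the paper's improvement-MDP construction), the following sketch computes the almost-sure winning region for a reachability objective via the classic nested fixed point; all identifiers here are hypothetical.

```python
def almost_sure_reach(states, actions, succ, target):
    """Almost-sure winning region for reaching `target` in an MDP.

    states : iterable of states
    actions: dict mapping state -> iterable of enabled actions
    succ   : dict mapping (state, action) -> set of possible successors
             (the support of the transition distribution; exact
             probabilities are irrelevant for qualitative analysis)
    target : set of goal states
    """
    W = set(states)  # candidate almost-sure winning set
    while True:
        # Positive-probability attractor of the target, restricted to W:
        # a state joins C if some action keeps every successor inside W
        # and reaches C with positive probability.
        C = set(target) & W
        changed = True
        while changed:
            changed = False
            for s in W - C:
                for a in actions[s]:
                    post = succ[(s, a)]
                    if post <= W and post & C:
                        C.add(s)
                        changed = True
                        break
        if C == W:
            return W  # fixed point: W is the almost-sure winning region
        W = C  # shrink W and recompute


# Example: state 3 is a losing sink; state 4 can only gamble between
# the target (2) and the sink, so it wins positively but not surely.
states = {0, 1, 2, 3, 4}
actions = {0: ["a", "b"], 1: ["a"], 2: ["a"], 3: ["a"], 4: ["a"]}
succ = {
    (0, "a"): {1, 2}, (0, "b"): {0},
    (1, "a"): {0},
    (2, "a"): {2},
    (3, "a"): {3},
    (4, "a"): {2, 3},
}
print(almost_sure_reach(states, actions, succ, {2}))  # -> {0, 1, 2}
```

A positive winning region (reach the target with positive probability) would instead be a single attractor computation without the outer shrinking loop.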