
A globally optimal algorithm for TTD-MDPs

AAMAS (2007)

Abstract
In this paper, we discuss the use of Targeted Trajectory Distribution Markov Decision Processes (TTD-MDPs)---a variant of MDPs in which the goal is to realize a specified distribution of trajectories through a state space---as a general agent-coordination framework. We present several advances to previous work on TTD-MDPs. We improve on the existing algorithm for solving TTD-MDPs by deriving a greedy algorithm that finds a policy that provably minimizes the global KL-divergence from the target distribution. We test the new algorithm by applying TTD-MDPs to drama management, where a system must coordinate the behavior of many agents to ensure that a game follows a coherent storyline, is in keeping with the author's desires, and offers a high degree of replayability. Although we show that suboptimal greedy strategies will fail in some cases, we validate previous work that suggests that they can work well in practice. We also show that our new algorithm provides guaranteed accuracy even in those cases, with little additional computational cost. Further, we illustrate how this new approach can be applied online, eliminating the memory-intensive offline sampling necessary in the previous approach.
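To make the objective described in the abstract concrete, the following minimal Python sketch computes the quantity a TTD-MDP policy is judged against: the KL-divergence between the trajectory distribution a policy actually induces and the author-specified target distribution. The trajectories, probabilities, variable names, and the direction of the divergence shown here are illustrative assumptions for a toy example, not details taken from the paper.

import math

# Hypothetical toy example (not from the paper): three complete trajectories
# through a small tree-structured state space, the author-specified target
# distribution over them, and the distribution actually induced by some policy.
target  = {"t1": 0.5,  "t2": 0.3,  "t3": 0.2}
induced = {"t1": 0.45, "t2": 0.35, "t3": 0.2}

def kl_divergence(p, q):
    """KL(p || q) over a shared, finite set of trajectories."""
    return sum(p[t] * math.log(p[t] / q[t]) for t in p if p[t] > 0)

# The TTD-MDP objective sketched in the abstract: choose the policy whose
# induced trajectory distribution minimizes this divergence from the target.
# This sketch uses KL(target || induced); the paper's exact formulation may differ.
print(f"KL(target || induced) = {kl_divergence(target, induced):.4f}")

In this toy setting the divergence is a single number over a handful of trajectories; the contribution of the paper is a greedy algorithm that provably minimizes the analogous global divergence over the full space of trajectories induced by an MDP policy.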
Keywords
Markov decision processes, target distribution, specified distribution, greedy algorithm, optimal algorithm, suboptimal greedy strategies, global optimization, convex optimization, state space