On the difficulty of modular reinforcement learning for real-world partial programming

AAAI 2006

Cited by 37
Abstract
In recent years there has been a great deal of interest in "modular reinforcement learning" (MRL). Typically, problems are decomposed into concurrent subgoals, allowing increased scalability and state abstraction. An arbitrator combines the subagents' preferences to select an action. In this work, we contrast two views of an MRL agent: as a set of subagents that share a single goal, and as a set of subagents with different, possibly conflicting goals. We argue that the latter is a more realistic description of real-world problems, especially when building partial programs. We survey a range of algorithms for single-goal MRL and, leveraging social choice theory, present an impossibility result for applying such algorithms to multigoal MRL. We suggest an alternative formulation of arbitration as scheduling that avoids the assumption of preference comparability implicit in single-goal MRL. A notable feature of this formulation is the explicit codification of the trade-offs among the subproblems. Finally, we introduce A2BL, a language that encapsulates many of these ideas.
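To make the arbitration step concrete, here is a minimal sketch of "greatest mass" (Q-sum) arbitration, a standard single-goal MRL scheme in which the arbitrator sums each module's Q-values and picks the highest-scoring action. The names (`Module`, `arbitrate`) are illustrative assumptions, not code from the paper; the cross-module comparability of Q-values that this scheme relies on is exactly the assumption the abstract argues fails when subagents have conflicting goals.

```python
# Sketch of "greatest mass" (Q-sum) arbitration for single-goal MRL.
# All identifiers are illustrative, not from the paper. Assumes each
# module learns its own Q-function over a shared action set and reports
# Q-values on a common scale -- the comparability assumption at issue.

from typing import Dict, List

Action = str

class Module:
    """One subagent: holds Q-values for its own subgoal."""
    def __init__(self, q: Dict[Action, float]):
        self.q = q

    def preferences(self) -> Dict[Action, float]:
        return self.q

def arbitrate(modules: List[Module], actions: List[Action]) -> Action:
    """Pick the action with the greatest summed Q-value across modules.

    Only well-defined if the modules' values share a scale; with
    genuinely conflicting goals that assumption breaks down.
    """
    totals = {a: sum(m.preferences().get(a, 0.0) for m in modules)
              for a in actions}
    return max(totals, key=totals.get)

# Toy usage: two subgoals that disagree about the best action.
avoid_enemy = Module({"left": 1.0, "right": -2.0})
collect_food = Module({"left": -0.5, "right": 1.5})
print(arbitrate([avoid_enemy, collect_food], ["left", "right"]))  # -> "left"
```

The impossibility result referenced in the abstract is Arrow-style in flavor: once subagent preferences are treated as ordinal and incomparable rather than as values on a shared scale, social choice theory shows that no aggregation rule can simultaneously satisfy a small set of natural conditions (unrestricted domain, Pareto efficiency, independence of irrelevant alternatives, non-dictatorship). See the paper for the precise statement as it applies to MRL arbitrators.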
Keywords
modular reinforcement learning, reinforcement learning, MRL agent, concurrent subgoals, single-goal MRL, conflicting goals, social choice theory, impossibility result, real-world partial programming