Risk-Averse Allocation Indices for Multiarmed Bandit Problem

IEEE Transactions on Automatic Control (2021)

Abstract
In the classical multiarmed bandit problem, the aim is to find a policy that maximizes the expected total reward, implicitly assuming that the decision-maker is risk-neutral. In some real-life applications, however, decision-makers are risk-averse. In this article, we design a new setting based on the concept of dynamic risk measures, where the aim is to find a policy with the best risk-adjust...
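To illustrate the kind of risk-averse preference the abstract describes, the sketch below uses a simple static mean-variance adjustment when ranking arms by their observed rewards. This is only an illustrative assumption, not the paper's method: the article is based on dynamic risk measures and allocation indices, which this toy greedy rule does not implement. The helper names (`risk_adjusted_value`, `choose_arm`) and the risk-aversion parameter are hypothetical.

```python
import statistics

def risk_adjusted_value(rewards, risk_aversion=1.0):
    """Static mean-variance proxy for risk-averse utility:
    mean(rewards) - risk_aversion * variance(rewards).
    Note: a simplification; the paper uses dynamic risk measures."""
    mean = statistics.fmean(rewards)
    var = statistics.pvariance(rewards) if len(rewards) > 1 else 0.0
    return mean - risk_aversion * var

def choose_arm(history, risk_aversion=1.0):
    """Greedily pick the arm with the best risk-adjusted empirical value.
    history maps arm id -> list of observed rewards; unplayed arms go first."""
    for arm, rewards in history.items():
        if not rewards:
            return arm  # explore arms with no observations yet
    return max(history,
               key=lambda a: risk_adjusted_value(history[a], risk_aversion))

# Example: arm 0 is steady, arm 1 has a higher mean but high variance.
history = {0: [1.0, 1.0, 1.0], 1: [0.0, 4.0, 0.0]}
```

With `risk_aversion=1.0` the variance penalty makes the steady arm 0 preferable, while a risk-neutral agent (`risk_aversion=0.0`) would pick arm 1 for its higher empirical mean, matching the distinction the abstract draws.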
Keywords
Markov processes,Indexes,Resource management,Heuristic algorithms,Dynamic scheduling,Routing,Random variables