Regret-Minimization Algorithms for Multi-Agent Cooperative Learning Systems
CoRR (2023)
Abstract
A Multi-Agent Cooperative Learning (MACL) system is an artificial
intelligence (AI) system where multiple learning agents work together to
complete a common task. Recent empirical success of MACL systems in various
domains (e.g. traffic control, cloud computing, robotics) has sparked active
research into the design and analysis of MACL systems for sequential decision
making problems. One important metric of the learning algorithm for decision
making problems is its regret, i.e. the difference between the highest
achievable reward and the actual reward that the algorithm gains. The design
and development of a MACL system with low-regret learning algorithms can create
huge economic values. In this thesis, I analyze MACL systems for different
sequential decision making problems. Concretely, Chapters 3 and 4
investigate cooperative multi-agent multi-armed bandit problems, with
full-information or bandit feedback, in which multiple learning agents
exchange information through a communication network and each agent observes
only the rewards of the actions it chooses. Chapter 5 considers the
communication-regret trade-off for online convex optimization in the
distributed setting. Chapter 6 discusses how to form highly productive teams
of agents, based on their unknown but fixed types, using adaptive incremental
matchings. For the above problems, I present regret lower bounds for
feasible learning algorithms and provide efficient algorithms that achieve
these bounds. The regret bounds presented in Chapters 3, 4, and 5 quantify how
the regret depends on the connectivity of the communication network and on the
communication delay, thus giving useful guidance on the design of the
communication protocol in MACL systems.
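The regret notion used above can be made concrete with a minimal sketch. Assuming a stochastic multi-armed bandit where each arm has a fixed mean reward, the cumulative (pseudo-)regret of a sequence of arm pulls is the gap between the best arm's expected reward and the expected reward actually collected. The arm means and the always-pull-arm-0 policy below are hypothetical, purely for illustration:

```python
def cumulative_regret(mean_rewards, choices):
    """Pseudo-regret: best achievable expected reward minus the
    expected reward collected by the chosen arms."""
    best = max(mean_rewards)
    return sum(best - mean_rewards[a] for a in choices)

# Hypothetical example: 3 arms; an agent that always pulls arm 0.
means = [0.25, 0.5, 0.75]
choices = [0] * 10  # ten pulls of the suboptimal arm 0
print(cumulative_regret(means, choices))  # 10 * (0.75 - 0.25) = 5.0
```

A low-regret algorithm makes `choices` concentrate on the best arm, so the per-round gap shrinks and the cumulative regret grows sublinearly in the number of pulls.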
Keywords
learning