ARMCHAIR: integrated inverse reinforcement learning and model predictive control for human-robot collaboration
CoRR (2024)
Abstract
One of the key issues in human-robot collaboration is the development of
computational models that allow robots to predict and adapt to human behavior.
Much progress has been achieved in developing such models, as well as control
techniques that address the autonomy problems of motion planning and
decision-making in robotics. However, the integration of computational models
of human behavior with such control techniques still poses a major challenge,
resulting in a bottleneck for efficient collaborative human-robot teams. In
this context, we present a novel architecture for human-robot collaboration:
Adaptive Robot Motion for Collaboration with Humans using Adversarial Inverse
Reinforcement learning (ARMCHAIR). Our solution leverages adversarial inverse
reinforcement learning and model predictive control to compute optimal
trajectories and decisions for a mobile multi-robot system that collaborates
with a human in an exploration task. During the mission, ARMCHAIR operates
without human intervention, autonomously identifying when the human needs
support and acting accordingly. Our approach also explicitly addresses the network
connectivity requirement of the human-robot team. Extensive simulation-based
evaluations demonstrate that ARMCHAIR allows a group of robots to safely
support a simulated human in an exploration scenario, preventing collisions and
network disconnections, and improving the overall performance of the task.
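The abstract describes pairing a reward learned via adversarial inverse reinforcement learning with model predictive control. As a minimal, hypothetical sketch of that idea (not the paper's actual implementation): a receding-horizon controller scores short action sequences under a learned reward and executes only the first action before replanning. The `learned_reward` below is a toy stand-in for what AIRL would recover, and the grid world, goal, and horizon are all illustrative assumptions.

```python
import itertools

# Hypothetical stand-in for an AIRL-learned reward: in the paper's setting
# this would be recovered from human demonstrations; here it simply favors
# states closer to an assumed goal cell on a grid.
GOAL = (4, 4)

def learned_reward(state):
    # Negative Manhattan distance to the (assumed) goal.
    return -(abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1]))

# Discrete action set: move in one of four directions, or stay.
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def step(state, action):
    return (state[0] + action[0], state[1] + action[1])

def mpc_plan(state, horizon=3):
    """Exhaustively score all action sequences over a short horizon and
    return the first action of the best one (receding-horizon MPC)."""
    best_seq, best_score = None, float("-inf")
    for seq in itertools.product(ACTIONS, repeat=horizon):
        s, score = state, 0.0
        for a in seq:
            s = step(s, a)
            score += learned_reward(s)
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq[0]

# Closed-loop rollout: replan at every step, execute the first action only.
state = (0, 0)
for _ in range(8):
    state = step(state, mpc_plan(state))
print(state)  # the rollout reaches the goal cell (4, 4)
```

In the full system the reward would be a neural discriminator trained adversarially on human trajectories, the dynamics would be the robots' motion model, and constraints (collision avoidance, network connectivity) would enter the MPC optimization; the exhaustive search here is only viable for this tiny discrete example.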