A Complementary Framework for Human-Robot Collaboration With a Mixed AR-Haptic Interface

IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY (2024)

Abstract
There is invariably a tradeoff between safety and efficiency for collaborative robots (cobots) in human-robot collaborations (HRCs). Robots that interact minimally with humans can work with high speed and accuracy but cannot adapt to new tasks or respond to unforeseen changes, whereas robots that work closely with humans can adapt, but only by becoming passive to humans, meaning that their main tasks are suspended and efficiency is compromised. Accordingly, this article proposes a new complementary framework for HRC that balances the safety of humans and the efficiency of robots. In this framework, the robot carries out given tasks using a vision-based adaptive controller, and the human expert collaborates with the robot in the null space. Such a decoupling allows the robot to deal with existing issues by itself in both the task space (e.g., uncalibrated camera, limited field of view (FOV)) and the null space (e.g., joint limits), while allowing the expert to adjust the configuration of the robot body to respond to unforeseen changes (e.g., sudden invasion, change in environment) without affecting the robot's main task. In addition, the robot can learn the expert's demonstration in the task space and the null space simultaneously beforehand with dynamic movement primitives (DMPs). Therefore, the expert's knowledge and the robot's capability are exploited and complement each other. Human demonstration and involvement are enabled via a mixed interaction interface, i.e., augmented reality (AR) and haptic devices. The stability of the closed-loop system is rigorously proved with Lyapunov methods. Experimental results in various scenarios are presented to illustrate the performance of the proposed method.
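The decoupling the abstract describes rests on null-space projection: the human's joint-space input is filtered through the null-space projector of the task Jacobian, so it reshapes the arm's configuration without disturbing the end-effector task. The sketch below illustrates this idea on a hypothetical planar 3R arm; the joint angles, link length, and velocity commands are illustrative assumptions, not values from the paper, and the paper's vision-based adaptive controller and DMP learning are not reproduced here.

```python
import numpy as np

def jacobian(q, link_len=1.0):
    """2x3 position Jacobian of a planar 3R arm (assumed example model)."""
    c = np.cumsum(q)  # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -link_len * np.sum(np.sin(c[i:]))
        J[1, i] = link_len * np.sum(np.cos(c[i:]))
    return J

q = np.array([0.3, 0.5, -0.2])       # current joint configuration (assumed)
J = jacobian(q)

dx_task = np.array([0.05, 0.0])      # task-space command from the robot's controller
dq_human = np.array([0.2, -0.1, 0.3])  # human's joint input via the haptic device (assumed)

# Null-space projector N = I - J^+ J: any joint velocity passed through N
# produces zero end-effector motion, so the human's input cannot
# disturb the main task.
J_pinv = np.linalg.pinv(J)
N = np.eye(3) - J_pinv @ J

# Combined command: task tracking plus human adjustment in the null space.
dq = J_pinv @ dx_task + N @ dq_human

# The human's contribution is invisible in the task space:
assert np.allclose(J @ (N @ dq_human), 0.0, atol=1e-10)
# while the end-effector still follows the commanded task velocity:
assert np.allclose(J @ dq, dx_task, atol=1e-10)
```

Because `J @ N = 0` by construction, the expert can reconfigure the elbow (e.g., to clear a sudden obstacle) while the end-effector velocity remains exactly the commanded one, which is the complementarity the framework exploits.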
Keywords
Collaborative robots, global adaptive control, human demonstration, null-space interaction