Object-Independent Human-to-Robot Handovers Using Real Time Robotic Vision

IEEE Robotics and Automation Letters (2021)

Cited 47 | Views 36
Abstract
We present an approach for safe, object-independent human-to-robot handovers using real-time robotic vision and manipulation. We aim for general applicability through a generic object detector, a fast grasp selection algorithm, and a single gripper-mounted RGB-D camera, thus avoiding reliance on external sensors. The robot is controlled via visual servoing towards the object of interest. With a strong emphasis on safety, we use two perception modules: human body part segmentation and hand/finger segmentation. Pixels deemed to belong to the human are filtered out of the candidate grasp poses, ensuring that the robot picks the object without colliding with the human partner. The grasp selection and perception modules run concurrently in real time, which allows continuous monitoring of the handover's progress. In experiments with 13 objects, the robot successfully took the object from the human in 81.9% of the trials.
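The core safety step described above, rejecting candidate grasp poses that overlap with pixels segmented as belonging to the human, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the (row, col) grasp representation, and the dilation margin are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import binary_dilation


def filter_grasps_by_human_mask(grasp_pixels, grasp_scores, human_mask, margin_px=5):
    """Discard grasp candidates that fall on (or near) human-segmented pixels.

    grasp_pixels : (N, 2) int array of (row, col) grasp centers in the RGB-D image.
    grasp_scores : (N,) array of grasp quality scores.
    human_mask   : (H, W) bool array, True where a pixel is classified as a
                   human body part or hand/finger.
    margin_px    : hypothetical safety margin used to dilate the human mask.
    """
    # Dilate the human mask so grasps that are merely close to the human
    # are also rejected, not only those directly on top of it.
    structure = np.ones((2 * margin_px + 1, 2 * margin_px + 1), dtype=bool)
    unsafe = binary_dilation(human_mask, structure=structure)

    rows, cols = grasp_pixels[:, 0], grasp_pixels[:, 1]
    safe = ~unsafe[rows, cols]

    # Return the remaining candidates ranked by score, best first.
    kept, kept_scores = grasp_pixels[safe], grasp_scores[safe]
    order = np.argsort(-kept_scores)
    return kept[order], kept_scores[order]
```

In a pipeline like the one described, such a filter would sit between the grasp selection algorithm and the visual-servoing controller, so that only human-free grasp poses are ever sent to the robot.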
Keywords
Handover, Robot kinematics, Real-time systems, Grasping, Safety