Self-adaptive Cobots in Cyber-Physical Production Systems

2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA) (2019)

Abstract
Absolute automation in certain industries, such as the automotive industry, has proven disadvantageous. Robots are highly capable at tasks that are repetitive and demand precision; the next step for automation, however, is a hybrid solution that combines the adaptability and resourcefulness of humans with the precision and efficiency of machines cooperating on the same task. Manipulators nevertheless lack self-adaptability and true collaborative behaviour. By integrating vision systems, manipulators can perceive their environment and interpret complex interactions. In this paper, a vision-based collaborative proof-of-concept framework is proposed using the Kinect v2, a UR5 robotic manipulator and MATLAB. The framework implements three behavioural modes: 1) a Self-Adaptive mode for obstacle detection and avoidance, 2) a Collaborative mode for physical human-robot interaction and 3) a standby Safe mode. These modes are activated through gestures, using the body-tracking and gesture-recognition algorithms of the Kinect v2. Additionally, to allow the robot to recognise itself in the scene, Region Growing segmentation is combined with the UR5's forward kinematics for precise, near real-time segmentation. Furthermore, self-adaptive reactive behaviour is implemented by applying an artificial repulsive action to the manipulator's end-effector. Reaction times were tested for all three modes: Collaborative and Safe mode took up to 5 seconds to complete the movement, while Self-Adaptive mode took up to 10 seconds between reactions.
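The "artificial repulsive action" mentioned in the abstract corresponds to the potential-fields approach listed in the keywords. As a rough illustration only (the paper's own implementation is in MATLAB and its gains and distances are not given here), the sketch below computes a Khatib-style repulsive velocity for the end-effector; the function name, influence radius and gain are illustrative assumptions, not the authors' values.

```python
import numpy as np

def repulsive_velocity(ee_pos, obstacle_pos, influence_radius=0.3, gain=0.5):
    """Potential-field repulsion: push the end-effector away from an
    obstacle that enters its influence radius (all values illustrative)."""
    diff = ee_pos - obstacle_pos
    d = np.linalg.norm(diff)
    if d >= influence_radius or d == 0.0:
        return np.zeros(3)          # obstacle too far (or coincident): no reaction
    # Magnitude grows as the obstacle approaches and vanishes at the boundary
    magnitude = gain * (1.0 / d - 1.0 / influence_radius) / d**2
    return magnitude * (diff / d)   # direction: straight away from the obstacle

# Obstacle 10 cm below the end-effector, inside a 30 cm influence zone:
v = repulsive_velocity(np.array([0.4, 0.0, 0.5]),
                       np.array([0.4, 0.0, 0.4]))
# v has a positive z-component, pushing the end-effector upward
```

In a reactive loop, this velocity would be superimposed on the manipulator's nominal motion command, so the end-effector deflects around obstacles without replanning the whole trajectory.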
Keywords
Cobots, Collaborative Robotics, Gesture Control, Industry 4.0, Kinect v2, Obstacle Detection, Potential Fields, Self-adaptive, Universal Robots UR5, Vision-based robot