DLWV2: A Deep Learning-Based Wearable Vision-System with Vibrotactile-Feedback for Visually Impaired People to Reach Objects

2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Citations 12 | Views 73
Abstract
We develop a Deep Learning-based Wearable Vision-system with Vibrotactile-feedback (DLWV2) to guide Blind and Visually Impaired (BVI) people to reach objects. The system achieves high accuracy in 3-D object detection and tracking using an extended deep learning-based 2.5-D detector and a 3-D object tracker that can track 3-D object locations even outside the camera field-of-view. We train our detector on a large number of images with 2.5-D object ground-truth (i.e., 2-D object bounding boxes and the distance from the camera to each object). A novel combination of the HTC Vive Tracker with our system allows us to obtain the ground-truth labels for training automatically, requiring very little human effort to set up. Moreover, our system processes frames in real-time through a client-server computing platform, so BVI people receive real-time vibrotactile guidance. We conduct a thorough user study with 12 BVI people in new environments containing object instances unseen during training. Our system outperforms the non-assistive guiding strategy with statistical significance in both completion time and the number of irrelevant objects contacted. Finally, interviews with BVI users confirm that our system with distance-based vibrotactile feedback is the most preferred, especially for objects requiring gentle manipulation, such as a bottle with water inside.
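The abstract does not specify how the distance-based vibrotactile cue is computed. Below is a minimal, hypothetical Python sketch of one way a tracked 3-D object position could be mapped to a vibration intensity; the function name, linear mapping, and `max_range` parameter are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def vibration_intensity(hand_pos, object_pos, max_range=1.0):
    """Map the hand-to-object distance to a vibration intensity in [0, 1].

    hand_pos, object_pos: 3-D positions in metres (e.g., from the tracker).
    max_range: distance (m) beyond which no vibration is emitted.

    The linear mapping below is an assumption for illustration; the paper
    only states that the feedback is distance-based.
    """
    distance = np.linalg.norm(np.asarray(object_pos) - np.asarray(hand_pos))
    if distance >= max_range:
        return 0.0
    # Closer objects produce stronger vibration.
    return 1.0 - distance / max_range

# Example: an object 30 cm from the hand yields intensity 0.7.
print(vibration_intensity([0.0, 0.0, 0.0], [0.3, 0.0, 0.0]))
```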
Keywords
BVI people, extended deep learning-based 2.5-D detector, Visually Impaired people, Vibrotactile-feedback, distance-based vibrotactile feedback, 2.5-D object ground-truth, 3-D object locations, 3-D object tracker, object detection, DLWV2, Deep Learning-based Wearable Vision-system