Multi-Sensory Visual-Auditory Fusion of Wearable Navigation Assistance for People with Impaired Vision

2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2023

Abstract
Navigating independently is a challenge for people with impaired vision because it demands avoiding obstacles, recognizing desired objects, and wayfinding in complicated environments. In this paper, we present augmented wearable E-Glasses equipped with a set of sensors, in which an object detection neural network based on a visual-auditory fusion method is employed to search for desired targets, thereby addressing navigation challenges and improving the mobility and independence of the visually impaired. We demonstrate advanced navigation capabilities: indoor wayfinding, recognizing and steering users toward desired goals, and completing a sequence of indoor challenges. The fusion network adopts a feature-level fusion strategy that automatically aligns the two modalities and effectively integrates visual and audio features. Across all experiments, the developed fusion algorithm achieves a 94.67% success rate. The wearable E-Glasses provide a platform that helps improve the mobility and quality of life of people with impaired vision.
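The abstract describes a feature-level fusion strategy that integrates visual and audio features before detection. The paper does not give the network details here, so the following is only a minimal illustrative sketch of the general feature-level fusion idea: a per-utterance audio embedding is broadcast across the spatial grid of a visual feature map and concatenated channel-wise, so downstream layers see both modalities jointly. The function name and shapes are hypothetical, not taken from the paper.

```python
def fuse_features(visual, audio):
    """Feature-level fusion sketch (assumed, not the paper's exact network).

    visual: H x W x C_v feature map as nested lists.
    audio:  length-C_a embedding vector for the spoken target query.
    Returns an H x W x (C_v + C_a) fused feature map, where the audio
    embedding is tiled over every spatial location and concatenated
    along the channel dimension.
    """
    return [[cell + list(audio) for cell in row] for row in visual]


# Toy example: a 2x2 visual map with 3 channels and a 2-dim audio embedding.
visual = [[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
          [[0.7, 0.8, 0.9], [1.0, 1.1, 1.2]]]
audio = [0.5, -0.5]
fused = fuse_features(visual, audio)
print(len(fused[0][0]))  # → 5 channels per location after fusion
```

In a real network this concatenation would typically be followed by convolutional layers (and, as the keywords suggest, a spatial attention mechanism) that learn to weight image regions matching the spoken query.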
Keywords
Visual Impairment, Neural Network, Object Detection, Visual Features, Fusion Method, Fusion Network, Fusion Algorithm, Set Of Sensors, Feature-level Fusion, Convolutional Neural Network, Series Of Experiments, Feature Maps, Wearable Devices, Actual Experiment, Walking Speed, Light Detection And Ranging, Local Map, Inertial Measurement Unit, Residual Block, Depth Camera, Simultaneous Localization And Mapping, Obstacle Avoidance, Navigation Algorithm, Loop Closure, COCO Dataset, Spatial Attention Mechanism, Bundle Adjustment, Loop Detection, Camera Pose, Global Positioning System