Integration of artificial vision with non-visual peripheral cues to guide mobility.

Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)(2022)

Abstract
Visual prostheses can improve vision for people with severe vision loss, but low image resolution and lack of peripheral vision limit their effectiveness. To address both problems, we developed a prototype advanced video processing system with a head-worn depth camera and feature detection capabilities. We used computer vision algorithms to detect landmarks representing a goal and to plan a path towards the goal, while removing unnecessary distractors from the video. If the landmark fell outside the visual prosthesis's field of view (20 degrees of central vision) but within the camera's field of view (70 degrees), we provided vibrational cues to the left or right temple to guide the user in pointing the camera. We evaluated an Argus II retinal prosthesis participant with significant vision loss who could not complete the task (finding a door in a large room) with either his remaining vision or his retinal prosthesis. His success rate improved to 57%, 37.5%, and 100%, while requiring 52.3, 83.0, and 58.8 seconds to reach the door, using only vibration feedback, the retinal prosthesis with modified video, and the retinal prosthesis with modified video and vibration feedback, respectively. This case study demonstrates a possible means of augmenting artificial vision. Clinical Relevance: Retinal prostheses can be enhanced by adding computer vision and non-visual cues.
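The cueing rule described above can be sketched as a small decision function. This is a hypothetical illustration, not the authors' implementation: the function name, the angle convention (horizontal offset of the landmark from the camera axis, in degrees), and the half-angle constants derived from the stated 20-degree and 70-degree fields of view are all assumptions.

```python
# Half-angles assumed from the 20-degree prosthesis FOV and
# 70-degree camera FOV stated in the abstract (hypothetical names).
PROSTHESIS_HALF_FOV = 10.0
CAMERA_HALF_FOV = 35.0

def temple_cue(offset_deg):
    """Decide which temple (if any) should vibrate to re-point the camera.

    offset_deg: horizontal angle of the detected landmark relative to the
    camera axis; negative = left of center, positive = right of center.
    Returns "left", "right", or None.
    """
    if abs(offset_deg) <= PROSTHESIS_HALF_FOV:
        return None  # landmark already within the prosthesis's central vision
    if abs(offset_deg) > CAMERA_HALF_FOV:
        return None  # landmark outside the camera view; no cue can be derived
    # Landmark is visible to the camera but not to the prosthesis:
    # vibrate the temple on the side the user should turn toward.
    return "left" if offset_deg < 0 else "right"
```

For example, a landmark 25 degrees to the right would trigger a right-temple cue, while one 5 degrees off-axis would need no cue at all.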
Keywords
Algorithms, Cues, Humans, Vision Disorders, Visual Fields, Visual Perception, Visual Prosthesis