Towards an Understanding of Real-time Captioning on Head-worn Displays.

MobileHCI (2020)

Abstract
Automatic speech recognition has made live captioning possible on mobile devices for people with hearing difficulties. In what situations is it advantageous to display the transcription on a head-worn display (HWD) versus a mobile phone? Based on iterative design efforts for an ongoing user study, we demonstrate the use of HWDs versus a mobile phone for captioning while a participant’s hearing is blocked using noise-canceling headphones. Previously, to compare the efficacy of the Vuzix Blade HWD to a mobile phone, eight participants attempted a toy block assembly task guided by captioning of a live instructor’s speech. The HWD had higher mental, physical, and overall workload scores than the phone, potentially due to the blurriness of the HWD’s image. Re-running the experiment with the Google Glass Enterprise Edition 2 HWD (above line-of-sight) with another twelve participants resulted in higher mental, effort, frustration, and overall workload scores than the phone. While Glass has a much sharper image than the Blade, the speech recognition quality was significantly worse. However, nine of the twelve participants stated that they would prefer Glass for the task if the speech recognition were better. Current efforts replace live speech recognition with a simulation of perfect recognition and improve display quality in order to more directly compare captioning on a mobile phone, a line-of-sight HWD, and an above line-of-sight HWD.