
Multimodal interaction: Input-output modality combinations for identification tasks in augmented reality

Applied Ergonomics (2022)

Abstract
Multimodal interaction (MMI) is being widely implemented, especially in new technologies such as augmented reality (AR) systems, since it is presumed to support a more natural, efficient, and flexible form of interaction. However, limited research has investigated the proper application of MMI in AR. More specifically, the effects of combining different input and output modalities during MMI in AR are still not fully understood. Therefore, this study aims to examine the independent and combined effects of different input and output modalities during a typical AR task. Twenty young adults participated in a controlled experiment in which they were asked to perform a simple identification task using an AR device under different input (speech, gesture, multimodal) and output (VV-VA, VV-NA, NV-VA, NV-NA) conditions. Results showed differences in the influence of input and output modalities on task performance, workload, perceived appropriateness, and user preference. Interaction effects between the input and output conditions on the performance metrics were also evident, suggesting that although multimodal input is generally preferred by users, it should be implemented with caution, since its effectiveness is highly influenced by the processing code of the system output. This study, the first of its kind, reveals several new implications for the application of MMI in AR systems.
Keywords
Multimodal interaction, Sensory modalities, Processing codes, Modality combination, Augmented reality