Supporting Text Entry in Virtual Reality with Large Language Models

Liuqing Chen, Yu Cai, Ruyue Wang, Shixian Ding, Yilin Tang, Preben Hansen, Lingyun Sun

IEEE Conference on Virtual Reality and 3D User Interfaces (2024)

Abstract
Text entry in virtual reality (VR) often faces challenges in efficiency and task load. Prior research has explored various solutions, including specialized keyboard layouts, tracked physical devices, and hands-free interaction, yet these efforts often fall short of real-world text entry efficiency or introduce additional spatial and device constraints. This study leverages the capabilities of large language models (LLMs) in context perception and text prediction to improve text entry efficiency by reducing users' manual keystrokes. Three LLM-assisted text entry methods are introduced: Simplified Spelling, Content Prediction, and Keyword-to-Sentence Generation, aligned with user cognition and the contextual predictability of English text at the word, grammatical-structure, and sentence levels. In user experiments spanning various text entry tasks on an Oculus-based VR prototype, these methods reduce manual keystrokes by 16.4%, 49.9%, and 43.7%, translating to efficiency gains of 21.4%, 74.0%, and 76.3%, respectively. Importantly, these methods do not increase manual corrections compared to manual typing, while significantly reducing physical, mental, and temporal loads and enhancing overall usability. Long-term observations further reveal users' strategies for using these LLM-assisted methods, showing that growing proficiency with the methods reinforces their positive effects on text entry efficiency.
Keywords
Human-centered computing, Human computer interaction (HCI), Interaction paradigms, Virtual Reality, Interaction techniques, Text input
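
The three methods named in the abstract map naturally onto simple prompt templates over a text-completion model. Below is a minimal sketch in Python of how such templates might look; `llm_complete` is a hypothetical stand-in for any completion endpoint, and the prompt wording is illustrative only, since the abstract does not disclose the authors' actual prompts, model, or decoding setup.

```python
from typing import Callable

# Hypothetical completion interface: takes a prompt, returns generated text.
LLMComplete = Callable[[str], str]

def simplified_spelling(llm_complete: LLMComplete, context: str, fragment: str) -> str:
    """Word level: expand an abbreviated or partially typed word using context."""
    prompt = (
        f"Context: {context}\n"
        f"The user typed the fragment '{fragment}'. "
        "Return the single most likely intended word."
    )
    return llm_complete(prompt)

def content_prediction(llm_complete: LLMComplete, typed_so_far: str) -> str:
    """Grammatical-structure level: propose the user's likely next words."""
    prompt = f"Continue this text with the most likely next few words:\n{typed_so_far}"
    return llm_complete(prompt)

def keyword_to_sentence(llm_complete: LLMComplete, keywords: list[str]) -> str:
    """Sentence level: generate a full sentence from user-entered keywords."""
    prompt = (
        "Write one natural English sentence that uses these keywords in order: "
        + ", ".join(keywords)
    )
    return llm_complete(prompt)
```

In a VR keyboard, each suggestion would be surfaced as a selectable candidate rather than auto-inserted, so accepting a prediction replaces several manual keystrokes with one confirmation, which is consistent with the keystroke reductions reported in the abstract.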