
Reducing motion to photon latency in multi-focal augmented reality near-eye display

OPTICAL ARCHITECTURES FOR DISPLAYS AND SENSING IN AUGMENTED, VIRTUAL, AND MIXED REALITY (AR, VR, MR) II (2021)

Abstract
It is foreseen that the most convenient hardware for depicting augmented reality (AR) will be optical see-through head-mounted displays. Current systems of this kind use a single focal plane and thus inflict a vergence-accommodation conflict on the human visual system, limiting wide acceptance. In this work, we analyze an optical see-through AR head-mounted display prototype with four focal planes operating in a time-sequential mode, mitigating this limitation of single-focal-plane devices. Nevertheless, the optical see-through nature of the device demands a very short motion-to-photon latency so as not to cause noticeable misalignment between the digital content and the real-world scene. The prototype display relies on a commercial visual-SLAM spatial tracking module (Intel RealSense T265), and within this work we analyzed factors that improve motion-to-photon latency with the given hardware setup. Performance analysis of the T265 module revealed slight translational and angular jitter, on the order of <1 mm and <15 arcseconds, as well as velocity readouts of a few cm/s from a completely still IMU. The experimentally determined motion-to-photon and render-to-photon latencies were 46 +/- 6 ms and 38 ms, respectively. To overcome the IMU's positional jitter, pose averaging with a variable-width averaging window was implemented; the size of the averaging window was adjusted based on the instantaneous acceleration and velocity data. For pose prediction, a basic rotational-axis offset model was verified. Trained on prerecorded head movements, the model reduced the error between the predicted and actually recorded poses. The optimization parameters were the offset values of the IMU's rotational axes, the translational and angular velocities, and the angular acceleration. As expected, the highest weights for the most accurate predictions were assigned to the velocities, followed by angular acceleration; the role of the offset values was not significant. For an improved perceived experience and further motion-to-photon latency reduction, we consider investigating simple trained neural networks for more accurate real-time pose prediction, as well as content-driven adaptive image output that overrides the default order of image-plane output in the time-sequential sequence.
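The two mitigation steps described in the abstract (jitter suppression via a variable-width pose-averaging window, and short-horizon pose prediction from velocity and acceleration) lend themselves to a compact sketch. The Python below is a minimal illustration under stated assumptions, not the authors' implementation: the thresholds, the Euler-angle pose representation, and all names (AdaptivePoseFilter, predict_pose, VEL_SCALE, ACC_SCALE) are hypothetical.

```python
import numpy as np
from collections import deque

# Illustrative sketch (not the paper's code): the averaging window shrinks
# when the headset moves, to avoid adding latency, and grows when it is
# still, to suppress the ~1 mm / ~15 arcsecond jitter reported for the T265.
# All constants below are assumed values for illustration only.

MAX_WINDOW = 12    # frames averaged when the IMU is effectively still
MIN_WINDOW = 1     # no smoothing during fast head motion
VEL_SCALE = 0.10   # translational speed (m/s) that roughly halves the window
ACC_SCALE = 2.0    # acceleration (m/s^2) that roughly halves the window

class AdaptivePoseFilter:
    def __init__(self):
        # Recent (position, orientation) samples; deque discards old frames.
        self.history = deque(maxlen=MAX_WINDOW)

    def window_size(self, speed, accel):
        # More motion -> smaller window -> less smoothing-induced latency.
        damping = 1.0 + speed / VEL_SCALE + accel / ACC_SCALE
        return max(MIN_WINDOW, int(MAX_WINDOW / damping))

    def update(self, position, orientation, speed, accel):
        """position: (3,) metres; orientation: (3,) radians (yaw, pitch, roll)."""
        self.history.append((np.asarray(position), np.asarray(orientation)))
        n = min(self.window_size(speed, accel), len(self.history))
        recent = list(self.history)[-n:]
        avg_pos = np.mean([p for p, _ in recent], axis=0)
        # Plain averaging of Euler angles is acceptable only for the tiny
        # jitter regime considered here (no wrap-around near +/-pi).
        avg_ori = np.mean([o for _, o in recent], axis=0)
        return avg_pos, avg_ori

def predict_pose(position, orientation, lin_vel, ang_vel, ang_acc, latency_s):
    # First-order extrapolation over the render-to-photon interval, echoing
    # the abstract's finding that velocity terms carry the highest weight,
    # followed by angular acceleration.
    pred_pos = position + lin_vel * latency_s
    pred_ori = orientation + ang_vel * latency_s + 0.5 * ang_acc * latency_s**2
    return pred_pos, pred_ori
```

A production tracker would represent orientation as a quaternion and use spherical averaging instead of component-wise Euler means; the simple form above is only defensible because the correction targets sub-millimetre, sub-arcminute jitter.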
Keywords
augmented reality,motion-to-photon latency,multi-focal display,head-mounted display,time-sequential