Real-time gaze prediction in virtual reality.

ACM SIGMM Conference on Multimedia Systems (MMSys), 2022

Abstract
Gaze is an important indicator of visual attention, and knowledge of gaze location can be used to improve and augment Virtual Reality (VR) experiences. This has led to the development of VR Head-Mounted Displays (HMDs) with built-in gaze trackers. Given the latency constraints of VR, foreknowledge of gaze, i.e., before it is reported by the gaze tracker, can similarly be leveraged to preemptively apply gaze-based improvements and augmentations to a VR experience, especially in distributed VR architectures. In this paper, we propose a lightweight neural-network-based method that uses only past HMD pose and gaze data to predict future gaze locations, forgoing computationally heavy saliency computation. Most work in this domain has focused on either 360° or egocentric video, or on synthetic VR content with rather naive interaction dynamics such as free viewing or supervised visual search tasks. Our solution considers data from the exhaustive OpenNEEDs dataset, which contains 6 Degrees of Freedom (6DoF) data captured in VR experiences where subjects were free to explore the VR scene and/or engage in tasks. Our solution outperforms a very strict baseline, using the current gaze as the prediction, in real time for sub-150 ms prediction horizons in VR use cases.
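The "current gaze" baseline the abstract refers to is a persistence predictor: the gaze at time t is used unchanged as the prediction for time t + h. A minimal sketch of how such a baseline is scored (all names, the unit-vector gaze representation, and the toy trace below are illustrative assumptions, not the paper's code):

```python
# Hypothetical sketch of the persistence ("current gaze") baseline:
# the prediction for time t + h is simply the gaze observed at time t.
# Gaze directions are assumed to be unit 3D vectors; the error metric
# is the angle between predicted and actual direction.
import math

def angular_error_deg(a, b):
    """Angle in degrees between two 3D unit gaze vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    dot = max(-1.0, min(1.0, dot))  # clamp to guard against rounding
    return math.degrees(math.acos(dot))

def persistence_baseline_errors(gaze, horizon_steps):
    """Predict gaze[t] for time t + horizon_steps and score each sample."""
    return [
        angular_error_deg(gaze[t], gaze[t + horizon_steps])
        for t in range(len(gaze) - horizon_steps)
    ]

# Toy trace (hypothetical data): gaze rotating 2° per sample in the
# horizontal plane, so a 3-step persistence prediction is 6° off.
trace = [(math.cos(math.radians(2 * t)), 0.0, math.sin(math.radians(2 * t)))
         for t in range(10)]
errors = persistence_baseline_errors(trace, horizon_steps=3)
```

Any learned predictor must beat this per-horizon angular error to justify its cost, which is why the abstract calls the baseline "very strict" at short horizons, where gaze rarely moves far.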
Key words
prediction, real-time