Head and eye movement planning differ in access to information during visual search

bioRxiv (Cold Spring Harbor Laboratory), 2022

Abstract

To characterize the process of visual search, reaction time is typically measured relative to stimulus onset, when the whole search field is presented in view simultaneously. Salient objects are found faster, suggesting that they are detected using peripheral vision (rather than each object being fixated in turn). This work investigated how objects are detected in the periphery when their onset in the visual field is caused by a head movement. Is the process of target detection similarly affected by salience? We tested this in a 360-degree environment with free head and eye movement, using a virtual reality headset with eye tracking. We presented letters and Gabor patches as stimuli in separate experiments. Four clusters were arranged horizontally such that two clusters were visible at onset on either side of a fixation cross (near locations), while the other two entered the field of view (FoV) only when the participant made an appropriate head movement (far locations). In both experiments we varied whether the target was less or more salient. We found an interesting discrepancy: across both tasks and locations, the first eye movement to land near a cluster landed closer to the salient target, even though salience did not lead to a faster head movement towards a cluster at the far locations. We also found that the planning of a head movement shifted the landing position of gaze towards the centres of the clusters at the far locations, leading to more accurate initial gaze positions relative to the target, regardless of salience. This suggests that the spatial information available for targeting eye movements within a given FoV is not always available for the planning of head movements, and that how a target appears in view affects gaze-targeting accuracy.
Keywords

eye movement planning, visual search, head