Pilot Study on Interaction with Wide Area Motion Imagery Comparing Gaze Input and Mouse Input

HCI (5) (2023)

Abstract
Recent sensor developments allow capturing Wide Area Motion Imagery (WAMI) covering several square kilometers and containing a vast number of tiny moving vehicles and persons. In this situation, interactive image exploitation by humans is exhausting and requires support from automated image exploitation such as multi-object tracking (MOT). MOT provides object detections that support finding small moving objects; moreover, MOT provides object tracks that support identifying an object by its movement behavior. As WAMI and MOT are current research topics, we aim to gain first insights into interaction with both. We introduce an experimental system comprising typical system functions for image exploitation and for interaction with object detections and object tracks. The system provides two input concepts. One utilizes a computer mouse and a keyboard for system input. The other utilizes a remote eye-tracker and a keyboard, since in prior work gaze-based selection of moving objects in Full Motion Video (FMV) appeared to be an efficient and manually less stressful input alternative to mouse input. We introduce five task types that might occur in practical visual WAMI exploitation. In a pilot study (N = 12; all non-expert image analysts), we compare gaze input and mouse input for these five task types. The results show that both input concepts allow similar user performance concerning error rates, completion time, and perceived workload (NASA-TLX). Most aspects of user satisfaction (ISO 9241-411 questionnaire) were rated similarly as well, except that general comfort was rated better for gaze input and eye fatigue better for mouse input.
Keywords
gaze input, wide area motion imagery, mouse, interaction