ReMap: Multimodal Help-Seeking

The Adjunct Publication of the 32nd Annual ACM Symposium on User Interface Software and Technology (2019)

Abstract
ReMap is a multimodal interface that enables searching for learning videos using speech and in-task pointing, extending multimodal interaction to help-seeking for complex tasks. Users can speak search queries, adding app-specific terms deictically by pointing at on-screen elements, and can navigate ReMap's search results via speech or mouse. These features allow people to stay focused on their task while simultaneously searching for and using help resources. Future work should explore more robust deictic resolution and support for additional modalities.
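
To make the deictic-query idea concrete, here is a minimal illustrative sketch of how a spoken query might be combined with the UI element under the cursor. This is a hypothetical example under stated assumptions, not ReMap's actual implementation: the `UIElement` type, the `resolve_deixis` function, and the example strings are all invented for illustration.

```python
# Hypothetical sketch (not ReMap's implementation): substitute deictic
# words in a spoken query with the app-specific name of the UI element
# the user is pointing at.
from dataclasses import dataclass

DEICTIC_WORDS = {"this", "that", "here"}

@dataclass
class UIElement:
    # Assumed: the host application exposes a label for the element
    # under the cursor (e.g., a tool name).
    name: str

def resolve_deixis(transcript: str, element_under_cursor: UIElement) -> str:
    """Replace deictic words in a spoken query with the app-specific
    term for the element the user is pointing at."""
    resolved = []
    for word in transcript.split():
        if word.strip("?.,!").lower() in DEICTIC_WORDS:
            resolved.append(element_under_cursor.name)
        else:
            resolved.append(word)
    return " ".join(resolved)

# Example: the user points at a "Clone Stamp" tool and says a query.
print(resolve_deixis("how do I use this", UIElement("Clone Stamp")))
# -> "how do I use Clone Stamp"
```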
Keywords
contextual search, deixis, multimodal interaction, speech