Digital Forms for All: A Holistic Multimodal Large Language Model Agent for Health Data Entry.

Andrea Cuadra, Justine Breuch, Samantha Estrada, David Ihim, Isabelle Hung, Derek Askaryar, Marwan Hassanien, Kristen L. Fessele, James A. Landay

Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (2024)

Abstract
Digital forms help us access services and opportunities, but they are not equally accessible to everyone, such as older adults or those with sensory impairments. Large language models (LLMs) and multimodal interfaces offer a unique opportunity to increase form accessibility. Informed by prior literature and needfinding, we built a holistic multimodal LLM agent for health data entry. We describe the process of designing and building our system, and the results of a study with older adults (N = 10). All participants, regardless of age or disability status, were able to complete a standard 47-question form independently using our system; one blind participant said it was "a prayer answered." Our video analysis revealed how different modalities provided alternative interaction paths in complementary ways (e.g., the buttons helped resolve transcription errors, and speech helped provide more options when the pre-canned answer choices were insufficient). We highlight key design guidelines, such as designing systems that dynamically adapt to individual needs.
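The abstract's point about complementary modalities (buttons give exact answers that recover from transcription errors; speech lets users go beyond the pre-canned choices) can be made concrete with a small sketch. This is not the authors' implementation; every name below (Question, ask_llm, interpret_speech, fill_form, get_user_event) is a hypothetical stand-in for how such an agent loop might route button and speech input under those assumptions.

```python
# Minimal sketch (not the paper's system) of an agent loop in which each
# form question can be answered by tapping a pre-canned choice (button)
# or by speaking, with spoken answers interpreted by an LLM.

from dataclasses import dataclass


@dataclass
class Question:
    text: str
    choices: list[str]  # pre-canned answers shown as buttons


def ask_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM backend; returns its reply."""
    raise NotImplementedError  # wire up a real model here


def interpret_speech(transcript: str, q: Question) -> str:
    """Map a (possibly noisy) transcript onto an answer.

    If the transcript matches a button choice, the LLM returns that
    choice; otherwise it keeps the free-text answer, since speech lets
    users go beyond the pre-canned options.
    """
    prompt = (
        f"Question: {q.text}\nChoices: {', '.join(q.choices)}\n"
        f"User said: {transcript!r}\n"
        "Reply with the matching choice, or the user's own answer verbatim."
    )
    return ask_llm(prompt)


def fill_form(questions: list[Question], get_user_event) -> dict[str, str]:
    """Run the form. get_user_event(q) yields ('button', choice) or
    ('speech', transcript) for each question, whichever the user picks."""
    answers = {}
    for q in questions:
        kind, payload = get_user_event(q)
        if kind == "button":
            # Buttons give an exact answer, e.g. to fix a transcription
            # error from a previous spoken attempt.
            answers[q.text] = payload
        else:
            answers[q.text] = interpret_speech(payload, q)
    return answers


if __name__ == "__main__":
    qs = [Question("Do you smoke?", ["Yes", "No", "Prefer not to say"])]
    # Simulate a button press so the sketch runs without an LLM backend.
    print(fill_form(qs, lambda q: ("button", "No")))
```

In this sketch the two modalities are deliberately interchangeable per question, mirroring the abstract's observation that alternative interaction paths complement one another rather than one being primary.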