Understanding and Improving Information Extraction From Online Geospatial Data Visualizations for Screen-Reader Users

International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), 2022

Abstract
Prior work has studied the interaction experiences of screen-reader users with simple online data visualizations (e.g., bar charts, line graphs, scatter plots), highlighting the disenfranchisement of screen-reader users in accessing information from these visualizations. However, the interactions of screen-reader users with online geospatial data visualizations, which visualization creators commonly use to represent geospatial data (e.g., COVID-19 cases per US state), remain unexplored. In this work, we study how screen-reader users interact with and extract information from online geospatial data visualizations. Specifically, we conducted a user study with 12 screen-reader users to understand the information they seek from online geospatial data visualizations and the questions they ask to extract that information. Drawing on these findings, we generated a taxonomy of the information our participants sought during their interactions. Additionally, we extended the functionalities of VoxLens, an open-source multi-modal solution that improves data visualization accessibility, to enable screen-reader users to extract information from online geospatial data visualizations.