Audemes at work: Investigating features of non-speech sounds to maximize content recognition

International Journal of Human-Computer Studies (2012)

Abstract
To access interactive systems, blind users can leverage their auditory senses by using non-speech sounds. The structure of existing non-speech sounds, however, is geared toward conveying atomic operations at the user interface (e.g., opening a file) rather than evoking broader, theme-based content typical of educational material (e.g., a historical event). To address this problem, we investigate audemes, a new category of non-speech sounds whose semiotic structure and flexibility open new horizons for aural interaction with content-rich applications. Three experiments with blind participants examined the attributes of an audeme that most facilitate the accurate recognition of its meaning. A sequential concatenation of different sound types (music, sound effect) yielded the highest meaning recognition, whereas an overlapping arrangement of sounds of the same type (music, music) yielded the lowest meaning recognition. We discuss seven guidelines for designing well-formed audemes.
Keywords
Audeme, Acoustic, Blind, Visually impaired, Non-speech sound, Recognition