This paper presents our work on the development of a multimodal auditory interface that permits blind users to work more easily and efficiently with GUI browsers. A macro-analysis phase, which can be either passive or active, provides information on the global layout of HTML documents. A subsequent active micro-analysis phase allows the user to explore particular elements of the document. The interface is based on: (1) a mapping of the graphical HTML document into a 3D virtual sound space environment, where non-speech auditory cues differentiate HTML elements; (2) the transcription into sound not only of text but also of images; (3) the use of a touch-sensitive screen to facilitate user interaction. Moreover, in order to validate the sonification model of the images, we have created an audio "memory game" that can be used as a pedagogical tool to help blind pupils learn spatial exploration cues.
The Internet now permits easy access to textual and pictorial material from an exponentially growing number of sources. The widespread use of graphical user interfaces, however, increasingly bars visually handicapped people from using such material. In this context, our project aims at providing visually handicapped people with alternative access modalities to pictorial documents. More precisely, our goal is to develop an augmented Internet browser that facilitates blind users' access to the World Wide Web. The main distinguishing characteristics of this browser are: (1) generation of a virtual sound space into which the screen information is mapped; (2) transcription into sounds not only of text but also of images; (3) active user interaction, both for the macro-analysis and the micro-analysis of screen objects of interest; (4) use of a touch-sensitive screen to facilitate user interaction. Several prototypes have been implemented and are being evaluated by blind users.
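As an illustration of characteristic (1), the mapping of screen information into a virtual sound space might be sketched as follows. This is a minimal, hypothetical example only: the element-to-cue assignments and the left-to-right azimuth layout are invented for illustration and are not the paper's actual sonification model.

```python
from html.parser import HTMLParser

# Hypothetical cue table: which non-speech sound announces which HTML
# element. These pairings are assumptions for this sketch, not the
# model described in the paper.
CUES = {"h1": "chime", "a": "click", "img": "ping", "p": "hum"}

class SoundSpaceMapper(HTMLParser):
    """Walk an HTML document and assign each known element a position
    in a virtual sound space (here, an azimuth in degrees) together
    with a non-speech auditory cue."""

    def __init__(self):
        super().__init__()
        self.mapping = []  # list of (tag, azimuth_degrees, cue)

    def handle_starttag(self, tag, attrs):
        if tag in CUES:
            # Spread successive elements across a -90..+90 degree arc,
            # mimicking a left-to-right spatial layout of the page.
            azimuth = -90 + (len(self.mapping) % 7) * 30
            self.mapping.append((tag, azimuth, CUES[tag]))

mapper = SoundSpaceMapper()
mapper.feed("<h1>Title</h1><p>Text with <a href='#'>a link</a></p>"
            "<img src='x.png'>")
print(mapper.mapping)
```

In an actual interface, each `(tag, azimuth, cue)` triple would drive a 3D audio renderer during the macro-analysis phase, so that the user hears the document's global layout before exploring individual elements.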