Traditional tactile cartography is complicated by problems associated with braille labeling and feature annotation. Audio-tactile display techniques can address many of these issues by associating spoken information and sounds with specific map elements. This article introduces Talking TMAP, a collaborative effort between The Smith-Kettlewell Eye Research Institute and Touch Graphics, Inc. Talking TMAP combines existing tools such as the World Wide Web, geographic information systems, braille embossers, and touch-tablet technology in new ways to produce a system capable of creating detailed and accurate audio-tactile street maps of any neighborhood. The article describes the system's software design and user interface, along with plans for future implementation.
It remains controversial whether using two hands and multiple fingers provides any perceptual advantage over a single index finger. The present study examines this long-running question in the haptic-exploration literature by applying rigorous psychophysical and mathematical-modeling techniques. We compared the performance of fourteen blindfolded sighted participants on seven tactile-map tasks using seven finger conditions. All tasks benefited from multiple fingers, but whether the benefit came from multiple fingers on one hand, on two hands, or both varied by task. Line-tracing tasks were performed faster with two hands, but not with more than one finger per hand. Local and global search tasks were faster with multiple fingers, but not with two hands. Distance-comparison tasks were also performed faster with multiple fingers, and sometimes with two hands. Lastly, moving in a straight line was faster with multiple fingers, but was especially difficult with just two index fingers. These results provide empirical evidence that multiple hands and fingers benefit haptic perception, but the benefits are more complex than simply extending the tactile field of 'view'. This analogy between touch and vision fails to account for the autonomous movements and sensations of the fingers, which we show benefit the haptic perceptual system.
This article compares two methods of employing novice Web workers to author descriptions of science, technology, engineering, and mathematics images to make them accessible to individuals with visual and print-reading disabilities. The goal is to identify methods of creating image descriptions that are inexpensive, effective, and follow established accessibility guidelines. The first method explicitly presented the guidelines to the worker, who then constructed the image description in an empty text box and table. The second method queried the worker for image information and then used the responses to construct a template-based description according to established guidelines. The descriptions generated through queried image description (QID) were more likely to include information on the image category, title, caption, and units. They were also more similar to one another, based on Jaccard distances of q-grams, indicating that their word usage and structure were more standardized. Lastly, the workers preferred describing images using QID and found the task easier. Therefore, explicit instruction on image-description guidelines is not sufficient to produce quality image descriptions when using novice Web workers. Instead, it is better to query workers for information about images, then generate descriptions from their responses using templates.
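The similarity metric named in the abstract, the Jaccard distance between q-gram sets, can be sketched as follows. This is an illustrative implementation, not the authors' code; the q-gram size of 3 is an assumed default.

```python
def qgrams(text, q=3):
    """Return the set of character q-grams (length-q substrings) in a string."""
    text = text.lower()
    return {text[i:i + q] for i in range(len(text) - q + 1)}

def jaccard_distance(a, b, q=3):
    """Jaccard distance between the q-gram sets of two descriptions.

    0.0 means the q-gram sets are identical; 1.0 means they share no q-grams.
    Lower average pairwise distance indicates more standardized wording.
    """
    ga, gb = qgrams(a, q), qgrams(b, q)
    if not ga and not gb:
        return 0.0
    return 1 - len(ga & gb) / len(ga | gb)
```

For example, two template-generated descriptions that differ in a single word will share most of their q-grams and so have a distance near 0, whereas two free-form descriptions with different wording will have a distance near 1.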
Blind and visually impaired mathematics students must rely on accessible materials such as tactile diagrams to learn mathematics. However, these compensatory materials are frequently found to offer students inferior opportunities for engaging in mathematical practice and do not allow sensorily heterogeneous students to collaborate. Such prevailing problems of access and interaction are central concerns of Universal Design for Learning (UDL), an engineering paradigm for inclusive participation in cultural praxes such as mathematics. Rather than directly adapting existing artifacts for broader usage, the UDL process begins by interrogating the praxis these artifacts serve and then radically re-imagining tools and ecologies to optimize usability for all learners. We argue for the utility of two additional frameworks to enhance UDL efforts: (a) enactivism, a cognitive-sciences view of learning, knowing, and reasoning as modal activity; and (b) ethnomethodological conversation analysis (EMCA), which investigates participants' multimodal methods for coordinating action and meaning. Combined, these approaches help frame the design and evaluation of opportunities for heterogeneous students to learn mathematics collaboratively in inclusive classrooms by coordinating perceptuo-motor solutions to joint manipulation problems. We contextualize the thesis with a proposal for a pluralist design for proportions, in which a pair of students jointly operate an interactive technological device.