To abide by the tenets of universal design theory, the design of a product or service needs to not only consider the inclusion of as many potential users and uses as possible but also do so from conception. Control over the creation and adaptation of the design should, therefore, fall under the purview of the original designer. Closed captioning has always been touted as an excellent example of universal design, or an electronic curb-cut, because it is a system designed for people who are deaf or hard of hearing, yet is used by many others for access to television in noisy environments such as gyms or pubs, or to learn a second language. Audio description is poised to have a similar impact. In this paper, we will demonstrate how the processes and practices associated with closed captioning and audio description, in their current form, violate some of the main principles of universal design and are thus not such good examples of it. In addition, we will introduce an alternative process and set of practices through which directors of television, film and live events are able to take control of closed captioning and audio description by integrating them into the production process. In doing so, we will demonstrate that closed captioning and audio description are worthy of directorial attention and creative input rather than being tacked on at the very end of the process, usually only to meet regulatory or legislative mandates.
We present a model human cochlea (MHC), a sensory substitution technique and system that translates auditory information into vibrotactile stimuli using an ambient tactile display. The model is used in the current study to translate music into discrete vibration signals displayed along the back of the body using a chair form factor. Voice coils facilitate the direct translation of auditory information onto the multiple discrete vibrotactile channels, which increases the potential to identify sections of the music that would otherwise be masked by the combined signal. One of the central goals of this work has been to improve accessibility to the emotional information expressed in music for users who are deaf or hard of hearing. To this end, we present our prototype of the MHC, two models of sensory substitution to support the translation of existing and new music, and some of the design challenges encountered throughout the development process. Results of a series of experiments conducted to assess the effectiveness of the MHC are discussed, followed by an overview of future directions for this research.
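To make the cochlea-like mapping concrete, the sketch below splits a mono audio signal into discrete frequency bands, one per vibrotactile channel, so that each band could drive a separate voice coil. This is a conceptual illustration, not the authors' implementation; the channel count, band edges, and filter order are all illustrative assumptions.

```python
# Hypothetical sketch of MHC-style channel splitting: band-pass a mono
# signal into one stream per vibrotactile channel. Band edges, channel
# count, and filter order are assumptions for illustration only.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # sample rate in Hz (assumed)

# Illustrative band edges (Hz) for four voice-coil channels, low to high.
BAND_EDGES = [(20, 250), (250, 1000), (1000, 4000), (4000, 12000)]

def split_into_channels(audio, fs=FS, bands=BAND_EDGES):
    """Band-pass the signal into one stream per vibrotactile channel."""
    channels = []
    for low, high in bands:
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        channels.append(sosfilt(sos, audio))
    return channels  # each entry would drive one voice coil

# Example: a 440 Hz test tone concentrates its energy in the second band.
t = np.linspace(0, 1, FS, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
streams = split_into_channels(tone)
print([round(float(np.sqrt(np.mean(s ** 2))), 4) for s in streams])
```

Separating the signal into bands this way is what lets a listener attend to one region of the spectrum on the skin, rather than receiving the combined signal on a single actuator where quieter parts would be masked.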
Five experiments investigated the ability to discriminate between musical timbres based on vibrotactile stimulation alone. Participants made same/different judgments on pairs of complex waveforms presented sequentially to the back through voice coils embedded in a conforming chair. Discrimination between cello, piano, and trombone tones matched for F0, duration, and magnitude was above chance with white noise masking the sound output of the voice coils (Experiment 1), with additional masking to control for bone-conducted sound (Experiment 2), and among a group of deaf individuals (Experiment 4a). Hearing (Experiment 3) and deaf individuals (Experiment 4b) also successfully discriminated between dull and bright timbres varying only with regard to spectral centroid. We propose that, as with auditory discrimination of musical timbre, vibrotactile discrimination may involve the cortical integration of filtered output from frequency-tuned mechanoreceptors functioning as critical bands.
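The dull/bright manipulation in Experiments 3 and 4b turns on the spectral centroid, the amplitude-weighted mean frequency of a signal's spectrum. The sketch below computes it for two synthetic tones sharing a fundamental but weighted differently across harmonics; the tones and weightings are illustrative, not the study's actual stimuli.

```python
# Minimal sketch of the spectral centroid: the amplitude-weighted mean
# frequency, sum(f * |X(f)|) / sum(|X(f)|). Synthetic tones below are
# illustrative stand-ins for "dull" vs. "bright" stimuli.
import numpy as np

def spectral_centroid(signal, fs):
    """Return the amplitude-weighted mean frequency of `signal` in Hz."""
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.sum(freqs * mags) / np.sum(mags)

fs = 44100
t = np.linspace(0, 1, fs, endpoint=False)
f0 = 200.0  # shared fundamental, mirroring the matched-F0 design

# "Dull" tone: harmonic amplitudes fall off steeply (1/k^2).
dull = sum((1 / k ** 2) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 9))
# "Bright" tone: same harmonics, flatter weighting (1/sqrt(k)) raises the centroid.
bright = sum((1 / k ** 0.5) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 9))

print(f"dull centroid:   {spectral_centroid(dull, fs):.0f} Hz")
print(f"bright centroid: {spectral_centroid(bright, fs):.0f} Hz")
```

Because the two tones share F0, duration, and overall level, the centroid is the only cue that separates them, which is what makes above-chance discrimination attributable to spectral shape rather than pitch or loudness.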
People who are blind or have low vision have only recently begun to enjoy greater access to television and video through a new technology called descriptive video information (DVI). Two styles of DVI production for animated comedy content were compared. The first model used a conventional description style, and the second used a first person narrative style. In addition, the first person narrative style was produced by the original animation creation team. Results from blind participants indicate that the first person narrative style shows promise, especially since all participants seemed to have positive entertainment experiences with the first person narrative DVI version of the content.