We present a model human cochlea (MHC), a sensory substitution technique and system that translates auditory information into vibrotactile stimuli using an ambient, tactile display. The model is used in the current study to translate music into discrete vibration signals displayed along the back of the body using a chair form factor. Voice coils map the auditory information directly onto multiple discrete vibrotactile channels, which increases the potential to identify sections of the music that would otherwise be masked by the combined signal. One of the central goals of this work has been to improve access to the emotional information expressed in music for users who are deaf or hard of hearing. To this end, we present our prototype of the MHC, two models of sensory substitution to support the translation of existing and new music, and some of the design challenges encountered throughout the development process. Results of a series of experiments conducted to assess the effectiveness of the MHC are discussed, followed by an overview of future directions for this research.
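For illustration only, the sketch below shows one way such a multi-channel translation could be approximated in software: an audio signal is split into a few frequency bands, one per vibrotactile channel. The sample rate, band edges, and channel count are assumptions made for the example, not details of the MHC prototype.

```python
# A minimal sketch (not the MHC implementation): split an audio signal
# into frequency bands, one band per vibrotactile channel.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # sample rate in Hz (assumed for the example)
# Hypothetical band edges in Hz, one band per voice coil channel.
BANDS = [(60, 250), (250, 1000), (1000, 4000), (4000, 8000)]

def split_into_channels(audio, fs=FS, bands=BANDS):
    """Return one band-limited signal per vibrotactile channel."""
    channels = []
    for low, high in bands:
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        channels.append(sosfilt(sos, audio))
    return channels

# Example: route a one-second 440 Hz test tone through the band splitter.
t = np.linspace(0, 1, FS, endpoint=False)
per_channel = split_into_channels(np.sin(2 * np.pi * 440 * t))
```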
We present an experiment designed to reveal characteristics of a tactile display that presents vibrations representing music to the back of the body. Based on the model human cochlea, a sensory substitution system aimed at translating music into vibrations, we investigate whether larger contactors (over 10 mm in diameter) are effective for detecting signals derived from music. Using the method of limits, we measured participants' ability to discriminate the frequency of vibrotactile stimuli across a wide range of frequencies common to Western classical harmonic music. Vibrotactile stimuli were presented to artificially deafened participants using a large contactor applied to the back. Between 65 Hz (C2) and 1047 Hz (C6), frequency difference limens (FDLs) were consistently less than 1/3 of an octave and as small as 200 cents. These findings suggest that vibrotactile information can support the experience of music even in the absence of sound, and that voice coils are effective in presenting some characteristics of sound as vibrations.
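To make the reported limens concrete, the following worked example assumes only the standard definition of the cent (1200 cents per octave); the specific frequencies are illustrative, not additional results.

```python
# Convert between frequency ratios and cents (1200 cents = 1 octave).
import math

def cents(f_ref, f_cmp):
    """Interval from f_ref up to f_cmp, in cents."""
    return 1200 * math.log2(f_cmp / f_ref)

def shift_by_cents(f_ref, n_cents):
    """Frequency reached by moving n_cents above f_ref."""
    return f_ref * 2 ** (n_cents / 1200)

print(round(cents(65.4, 73.4)))             # ~200 cents: a whole tone above C2
print(round(shift_by_cents(65.4, 400), 1))  # 1/3 octave above C2 is ~82.4 Hz (E2)
```

A 200-cent limen at C2 thus corresponds to distinguishing roughly a whole tone, while 1/3 of an octave corresponds to 400 cents.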
In this paper, we describe our investigation into user tolerance of recognition errors during hand gesture interactions with visual displays. The study is based on our proposed interaction model for investigating gesture-based interactions, which focuses on three elements: interaction context, system performance, and user goals. This Wizard of Oz experiment investigates how recognition system accuracy rates and task characteristics in both desktop and ubiquitous computing scenarios influence user tolerance for gesture interactions. Results suggest that interaction context has a greater influence on user tolerance than system performance alone: in a ubiquitous computing scenario, recognition error rates can reach 40% before users will abandon gestures for an alternate interaction mode. Results also suggest that in a desktop scenario, traditional input methods are more appropriate than gestures.
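As a hedged sketch of the underlying manipulation, one might simulate a wizard-controlled recognizer that returns a wrong command at a fixed error rate; the command set and rates below are assumptions for illustration and are not taken from the study.

```python
# Illustrative simulation of injecting recognition errors at a set rate.
import random

COMMANDS = ["select", "scroll", "zoom", "dismiss"]  # hypothetical gesture set

def simulated_recognizer(intended, error_rate):
    """Return the intended command, or a wrong one with probability error_rate."""
    if random.random() < error_rate:
        return random.choice([c for c in COMMANDS if c != intended])
    return intended

# Probe observed accuracy at error rates around the reported 40% threshold.
for rate in (0.10, 0.25, 0.40):
    hits = sum(simulated_recognizer("select", rate) == "select" for _ in range(1000))
    print(rate, hits / 1000)
```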