Recent developments in ear-mounted wearable computer interfaces (i.e., "hearables") give these devices a number of distinct affordances over other wearables in ambient and ubiquitous computing systems. This paper surveys hearables and the possibilities they offer as computer interfaces. These affordances are then examined relative to those of other wearable interfaces. Finally, several historical trends within this domain are noted, and multiple paths for future development are offered.
Image-guided neurosurgery, or neuronavigation, has been used to visualise the location of a surgical probe by mapping the probe location onto pre-operative models of a patient's anatomy. One common limitation of this approach is that it requires the surgeon to divert their attention away from the patient and towards the neuronavigation system. To address this limitation, the authors designed a system that sonifies (i.e., provides audible feedback of) the distance between a surgical probe and the location of the anatomy of interest. A user study (n = 15) was completed to determine the utility of sonified distance information within an existing neuronavigation platform (Intraoperative Brain Imaging System (IBIS) Neuronav). The authors' results were consistent with the idea that combining auditory distance cues with existing visual information from image-guided surgery systems may yield greater accuracy when locating specified points on a pre-operative scan, thereby potentially reducing the extent of the required surgical openings and increasing the precision of individual surgical tasks. Further, the results were also consistent with the hypothesis that combining auditory and visual information reduces the perceived difficulty of locating a target within a three-dimensional volume.
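To make the sonification concept concrete, the following is a minimal sketch of one way distance information might be mapped to audible feedback, with pitch rising as the probe approaches the target. This is an illustrative assumption only: the actual mapping and parameters used in the IBIS Neuronav system are not described here, and the function names, frequency ranges, and use of the sounddevice library are hypothetical choices.

```python
# Minimal, hypothetical sketch of distance-to-pitch sonification.
# The real IBIS Neuronav sonification design is not reproduced here.

import numpy as np
import sounddevice as sd  # assumed available for simple audio playback

SAMPLE_RATE = 44100  # Hz

def sonify_distance(distance_mm: float,
                    max_distance_mm: float = 50.0,
                    f_min: float = 220.0,
                    f_max: float = 880.0,
                    duration_s: float = 0.15) -> np.ndarray:
    """Return a short tone whose pitch rises as the probe nears the target."""
    # Normalise distance to [0, 1], where 0 means the probe is at the target.
    d = float(np.clip(distance_mm / max_distance_mm, 0.0, 1.0))
    freq = f_max - d * (f_max - f_min)  # closer probe -> higher pitch
    t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    return (0.3 * np.sin(2.0 * np.pi * freq * t)).astype(np.float32)

# Example: play one feedback tone for a probe 12 mm from the target.
sd.play(sonify_distance(12.0), SAMPLE_RATE)
sd.wait()
```

Mapping distance to pitch is only one of several plausible designs; loudness, pulse rate, or timbre could equally carry the distance cue.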
Recent research suggests that the perception of sound-source size may be based in part on attributes of timbre and, further, that it may play a role in understanding emotional responses to music. Here, we report two perceptual studies in which the TANDEM-STRAIGHT vocoder was used to modify musical instrument tones to emulate equivalent instruments of different sizes. In each experiment, listeners heard sequential tone pairs in which tones from the same instrument were manipulated to sound as if they had originated from a larger or smaller source. Manipulations included modifications of both fundamental frequency (f0) and spectral envelope. Participants estimated the direction and magnitude of these size changes. We collected data with and without RMS normalization of the TANDEM-STRAIGHT vocoder output in Experiments 1 and 2, respectively. In both cases, manipulations of f0 and spectral envelope had significant effects on the perception of sound-source size change, although results varied across musical instruments and depended on whether the sounds were equalized in level. The results highlight several important considerations for understanding musical timbre and pitch, and are discussed in light of their implications for the perception of musical affect.
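As a small illustration of what level equalization involves, the sketch below normalizes a signal's root-mean-square (RMS) amplitude before presentation. This is a generic, assumed procedure with an illustrative target level, not the specific normalization applied to the TANDEM-STRAIGHT output in the study.

```python
# Generic RMS (level) normalisation sketch; parameter values are illustrative.

import numpy as np

def rms_normalize(signal: np.ndarray, target_rms: float = 0.1) -> np.ndarray:
    """Scale a mono signal so that its RMS amplitude equals target_rms."""
    rms = np.sqrt(np.mean(np.square(signal)))
    if rms == 0.0:
        return signal  # leave silent input unchanged to avoid division by zero
    return signal * (target_rms / rms)

# Example: equalise the two tones of a pair before playback.
# tone_a, tone_b = ...  # vocoder outputs (not reproduced here)
# pair = [rms_normalize(tone_a), rms_normalize(tone_b)]
```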
The human auditory system can rapidly process musical information, such as the recognition and identification of sound sources; the deciphering of meter, tempo, mode, and texture; the processing of lyrics and dynamics; the identification of musical style and genre; the perception of performance nuance; and the apprehension of emotional character. Two empirical studies are reported that attempt to chronicle when such information is processed. In the first, exploratory study, a diverse set of musical excerpts was selected and trimmed to various durations ranging from 50 ms to 3000 ms. These samples, beginning with the shortest and ending with the longest, were presented to participants, who were asked to free-associate and describe any observations that came to mind. Based on these results, a second, main study was carried out using a betting paradigm to determine the amount of exposure listeners need before feeling confident about acquired musical information. The results suggest a rapid unfolding of cognitive processes within a 3-second listening span.