Auditory attention decoding (AAD) through a brain-computer interface has seen a flowering of developments since it was first introduced by Mesgarani and Chang (2012) using electrocorticographic recordings. AAD has been pursued for its potential application to hearing-aid design, in which an attention-guided algorithm selects, from multiple competing acoustic sources, which should be enhanced for the listener and which should be suppressed. Traditionally, researchers have separated the AAD problem into two stages: reconstruction of a representation of the attended audio from neural signals, followed by determining the similarity between the candidate audio streams and the reconstruction. Here, we compare the traditional two-stage approach with a novel neural-network architecture that subsumes the explicit similarity step. We compare this new architecture against linear and non-linear (neural-network) baselines using both wet and dry electroencephalogram (EEG) systems. Our results indicate that the new architecture outperforms the baseline linear stimulus-reconstruction method, improving decoding accuracy from 66% to 81% with wet EEG and from 59% to 87% with dry EEG. Also of note, the dry EEG system delivered comparable or even better results than the wet system, despite having only one third as many EEG channels. The 11-subject, wet-electrode AAD dataset for two competing, co-located talkers, the 11-subject, dry-electrode AAD dataset, and our software are available for further validation, experimentation, and modification.
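The traditional two-stage pipeline described above (linear stimulus reconstruction, then correlation with each candidate stream) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation; the lag count, ridge regularizer, and all function names are assumptions.

```python
import numpy as np

def lag_matrix(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel (samples x channels*lags)."""
    n, c = eeg.shape
    X = np.zeros((n, c * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag]
    return X

def train_decoder(eeg, envelope, n_lags=16, reg=1e-3):
    """Stage 1: ridge-regularized least squares mapping lagged EEG to the
    attended-speech envelope."""
    X = lag_matrix(eeg, n_lags)
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ envelope)

def decode_attention(eeg, env_a, env_b, w, n_lags=16):
    """Stage 2: reconstruct an envelope from EEG, then pick the candidate
    stream with the higher Pearson correlation."""
    rec = lag_matrix(eeg, n_lags) @ w
    r_a = np.corrcoef(rec, env_a)[0, 1]
    r_b = np.corrcoef(rec, env_b)[0, 1]
    return 'A' if r_a > r_b else 'B'
```

The ridge penalty keeps the lagged-EEG covariance matrix well conditioned; in practice the decoder is trained on attended-speech segments and evaluated on held-out trials.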
Neural representation of pitch-relevant information at both the brainstem and cortical levels of processing is influenced by language or music experience. However, the functional roles of brainstem and cortical neural mechanisms in the hierarchical network for language processing, and how they drive and maintain experience-dependent reorganization, are not known. In an effort to evaluate the possible interplay between these two levels of pitch processing, we introduce a novel electrophysiological approach that records pitch-relevant neural activity at the brainstem and auditory cortex concurrently. Brainstem frequency-following responses and cortical pitch responses were recorded from participants in response to iterated rippled noise stimuli that varied in stimulus periodicity (pitch salience). A control condition using iterated rippled noise devoid of pitch was employed to ensure pitch specificity of the cortical pitch response. Neural data were compared with behavioral pitch discrimination thresholds. Results showed that magnitudes of neural responses increase systematically and that behavioral pitch discrimination improves with increasing stimulus periodicity, indicating more robust encoding for salient pitch. Absence of cortical pitch response in the control condition confirms that the cortical pitch response is specific to pitch. Behavioral pitch discrimination was better predicted by brainstem and cortical responses together than by each separately. The close correspondence between neural and behavioral data suggests that neural correlates of pitch salience that emerge in early, preattentive stages of processing in the brainstem may drive and maintain with high fidelity the early cortical representations of pitch. These neural representations together contain adequate information for the development of perceptual pitch salience.
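Iterated rippled noise of the kind used here is generated with a delay-and-add network. The sketch below implements a generic "add-same" variant (the variant, parameters, and names are assumptions for illustration, not taken from the study): each iteration adds a delayed copy of the running waveform, building periodicity, and hence pitch salience, at 1/delay, while zero iterations leaves plain noise.

```python
import numpy as np

def iterated_rippled_noise(noise, delay_samples, n_iterations, gain=1.0):
    """Delay-and-add ('add-same') IRN generator.

    Each pass adds a delayed copy of the current waveform to itself,
    strengthening the autocorrelation peak at delay_samples (pitch at
    fs / delay_samples).  delay_samples must be > 0.
    """
    y = noise.astype(float).copy()
    for _ in range(n_iterations):
        delayed = np.zeros_like(y)
        delayed[delay_samples:] = y[:-delay_samples]  # shift right by the delay
        y = y + gain * delayed
    return y / np.max(np.abs(y))  # normalize peak amplitude
```

Varying `n_iterations` (with zero as the no-pitch control) parametrically varies pitch salience while keeping the long-term spectrum noise-like, which is why IRN is a common stimulus for isolating pitch-specific responses.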
When exposed to continuous high-level noise, cochlear neurons are more susceptible to damage than hair cells (HCs): exposures causing temporary threshold shifts (TTS) without permanent HC damage can destroy ribbon synapses, permanently silencing the cochlear neurons they formerly activated. While this “hidden hearing loss” has little effect on thresholds in quiet, the neural degeneration degrades hearing in noise and may be an important elicitor of tinnitus. Similar sensory pathologies are seen after blast injury, even if permanent threshold shift (PTS) is minimal. We hypothesized that, as for continuous noise, blasts causing only TTS can also produce cochlear synaptopathy with minimal HC loss. To test this, we customized a shock-tube design to generate explosive-like impulses, exposed anesthetized chinchillas to blasts with peak pressures from 160 to 175 dB SPL, and examined the resultant cochlear dysfunction and histopathology. We found that exposures causing large (>40 dB) TTS with minimal PTS or HC loss often produced synapse losses of 20–45%. While synaptopathic continuous-noise exposures can affect large areas of the cochlea, blast-induced synaptopathy was more focal, with localized damage foci in midcochlear and basal regions. These results clarify the pathology underlying blast-induced sensory dysfunction and suggest possible links between blast injury, hidden hearing loss, and tinnitus.
Experience-dependent enhancement of neural encoding of pitch in the auditory brainstem has been observed for only specific portions of native pitch contours exhibiting high rates of pitch acceleration, irrespective of speech or nonspeech contexts. This experiment allows us to determine whether this language-dependent advantage transfers to acceleration rates that extend beyond the pitch range of natural speech. Brainstem frequency-following responses (FFRs) were recorded from Chinese and English participants in response to four 250-ms dynamic click-train stimuli with different rates of pitch acceleration. The maximum pitch acceleration rates in a given stimulus ranged from low (0.3 Hz/ms; Mandarin Tone 2) to high (2.7 Hz/ms; 2 octaves). Pitch strength measurements were computed from the FFRs using autocorrelation algorithms with an analysis window centered at the point of maximum pitch acceleration in each stimulus. Between-group comparisons of pitch strength revealed that the Chinese group exhibited more robust pitch representation than the English group across all four acceleration rates. Regardless of language group, pitch strength was greater in response to acceleration rates within or proximal to natural speech relative to those beyond its range. Though both groups showed decreasing pitch strength with increasing acceleration rates, pitch representations of the Chinese group were more resistant to degradation. FFR spectral data were complementary across acceleration rates. These findings demonstrate that perceptually salient pitch cues associated with lexical tone influence brainstem pitch extraction not only in the speech domain, but also in auditory signals that clearly fall outside the range of dynamic pitch that a native listener is exposed to.
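An autocorrelation-based pitch-strength measure of the general kind described above can be sketched briefly. The sampling rate, pitch-lag search bounds, and function name below are illustrative assumptions, not the study's exact analysis parameters or window placement.

```python
import numpy as np

def pitch_strength(ffr, fs, f_lo=80.0, f_hi=400.0):
    """Pitch strength as the height of the normalized autocorrelation peak
    within a plausible pitch-lag range (here 80-400 Hz)."""
    x = ffr - ffr.mean()
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]  # lags 0 .. N-1
    ac = ac / ac[0]                                     # lag 0 normalized to 1
    lo = int(fs / f_hi)                                 # shortest lag (highest f0)
    hi = int(fs / f_lo)                                 # longest lag (lowest f0)
    return float(ac[lo:hi + 1].max())
```

A strongly periodic response yields a normalized peak near 1; a noise-like response yields a value near 0, so the measure indexes how robustly the FFR phase-locks to the stimulus periodicity inside the analysis window.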
The medial olivocochlear reflex (MOCR) has been hypothesized to provide benefit for listening in noisy environments. This advantage can be attributed to a feedback mechanism that suppresses auditory nerve (AN) firing in continuous background noise, resulting in increased sensitivity to a tone or speech. MOC neurons synapse on outer hair cells (OHCs), and their activity effectively reduces cochlear gain. The computational model developed in this study implements the time-varying, characteristic frequency (CF) and level-dependent effects of the MOCR within the framework of a well-established model for normal and hearing-impaired AN responses. A second-order linear system was used to model the time-course of the MOCR using physiological data in humans. The stimulus-level-dependent parameters of the efferent pathway were estimated by fitting AN sensitivity derived from responses in decerebrate cats using a tone-in-noise paradigm. The resulting model uses a binaural, time-varying, CF-dependent, level-dependent OHC gain reduction for both ipsilateral and contralateral stimuli that improves detection of a tone in noise, similarly to recorded AN responses. The MOCR may be important for speech recognition in continuous background noise as well as for protection from acoustic trauma. Further study of this model and its efferent feedback loop may improve our understanding of the effects of sensorineural hearing loss in noisy situations, a condition in which hearing aids currently struggle to restore normal speech perception.
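One way to picture the second-order linear dynamics is as two cascaded first-order low-pass stages, so that a sustained elicitor gradually drives OHC gain down toward a reduced value. The time constants, gain-reduction depth, and names in this sketch are illustrative assumptions, not the model's fitted values.

```python
import numpy as np

def mocr_gain(elicitor, fs, tau1=0.1, tau2=0.2, strength=0.6):
    """Sketch of a second-order linear MOCR time course: two cascaded
    first-order low-pass stages convert an elicitor envelope (0..1) into a
    slow OHC gain reduction from 1.0 toward (1 - strength)."""
    a1 = np.exp(-1.0 / (fs * tau1))
    a2 = np.exp(-1.0 / (fs * tau2))
    s1 = s2 = 0.0
    gain = np.empty_like(elicitor)
    for i, x in enumerate(elicitor):
        s1 = a1 * s1 + (1 - a1) * x   # first low-pass stage
        s2 = a2 * s2 + (1 - a2) * s1  # second low-pass stage
        gain[i] = 1.0 - strength * s2  # time-varying cochlear gain
    return gain
```

Because the gain reduction builds up over hundreds of milliseconds, a tone embedded in continuous noise is decompressed relative to the already-suppressed noise floor, which is the intuition behind the tone-in-noise detection benefit.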