Speech comprehension difficulties are ubiquitous in aging and hearing loss, particularly in noisy environments. Older adults' poorer speech-in-noise (SIN) comprehension has been related to abnormal neural representations within various nodes (regions) of the speech network, but how senescent changes in hearing alter the transmission of brain signals remains unspecified. We measured electroencephalograms in older adults with and without mild hearing loss during a SIN identification task. Using functional connectivity and graph-theoretic analyses, we show that hearing-impaired (HI) listeners have more extended (less integrated) communication pathways and less efficient information exchange among widespread brain regions (larger network eccentricity) than their normal-hearing (NH) peers. Parameter-optimized support vector machine (SVM) classifiers applied to EEG connectivity data showed that hearing status could be decoded (>85% accuracy) solely from network-level descriptions of brain activity, but classification was particularly robust using left hemisphere connections. Notably, we found a reversal in directed neural signaling in the left hemisphere, dependent on hearing status, among specific connections within the dorsal-ventral speech pathways. NH listeners showed an overall net "bottom-up" signaling directed from auditory cortex (A1) to inferior frontal gyrus (IFG; Broca's area), whereas the HI group showed the reverse signal (i.e., "top-down" Broca's → A1). A similar flow reversal was noted between …
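As a rough illustration of the connectivity-based decoding described above, the sketch below derives node-level graph eccentricity from an EEG functional-connectivity matrix and feeds it to a parameter-optimized SVM. This is a minimal sketch, not the authors' pipeline: the cohort size, 68-region parcellation, connectivity threshold, and SVM hyperparameter grid are illustrative assumptions, and the data are randomly generated placeholders.

```python
# Sketch: graph eccentricity from EEG connectivity + parameter-optimized SVM decoding.
# All sizes, thresholds, and labels below are assumptions for illustration only.
import numpy as np
import networkx as nx
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def eccentricity_features(conn, threshold=0.3):
    """Binarize a (regions x regions) connectivity matrix and return per-node eccentricity."""
    adj = (np.abs(conn) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    G = nx.from_numpy_array(adj)
    # Eccentricity is only defined on connected graphs; fall back to the largest component.
    if not nx.is_connected(G):
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    out = np.zeros(conn.shape[0])
    for node, ecc in nx.eccentricity(G).items():
        out[node] = ecc
    return out

rng = np.random.default_rng(0)
n_subjects, n_regions = 30, 68                         # hypothetical cohort / parcellation
raw = rng.uniform(-1, 1, (n_subjects, n_regions, n_regions))
conn_matrices = (raw + raw.transpose(0, 2, 1)) / 2     # symmetrize: undirected connectivity
y = rng.integers(0, 2, n_subjects)                     # placeholder labels: 0 = NH, 1 = HI

X = np.vstack([eccentricity_features(c) for c in conn_matrices])

# Parameter-optimized SVM: grid search over C and kernel width, nested cross-validation.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(svm,
                    {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]},
                    cv=StratifiedKFold(5))
acc = cross_val_score(grid, X, y, cv=StratifiedKFold(5)).mean()
print(f"Cross-validated decoding accuracy: {acc:.2f}")
```

With real data, the eccentricity vectors would be computed from each subject's estimated functional-connectivity matrix rather than random placeholders; on the synthetic labels above, accuracy hovers near chance, as expected.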
Categorical perception (CP) of audio is critical to understanding how the human brain perceives speech sounds despite widespread variability in their acoustic properties. Here, we investigated the spatiotemporal characteristics of auditory neural activity that reflects CP for speech (i.e., differentiates phonetic prototypes from ambiguous speech sounds). We recorded high-density EEGs as listeners rapidly classified vowel sounds along an acoustic-phonetic continuum. We used support vector machine (SVM) classifiers and stability selection to determine when and where in the brain CP was best decoded across space and time via source-level analysis of the event-related potentials (ERPs). We found that early (~120 ms) whole-brain data decoded speech categories (i.e., prototypical vs. ambiguous speech tokens) with 95.16% accuracy [area under the curve (AUC) 95.14%; F1-score 95.00%]. Separate analyses of left hemisphere (LH) and right hemisphere (RH) responses showed that LH decoding was more robust and earlier than RH (89.03% vs. 86.45% accuracy; 140 ms vs. 200 ms). Stability (feature) selection identified 13 regions of interest (ROIs) out of 68 brain regions (including auditory cortex, supramarginal gyrus, and Broca's area) that showed categorical representation during stimulus encoding (0–260 ms). In contrast, 15 ROIs (including fronto-parietal regions, Broca's area, and motor cortex) were necessary to describe later decision stages (>300 ms) of categorization, but these areas were highly associated with the strength of listeners' categorical hearing (i.e., the slope of behavioral identification functions). Our data-driven multivariate models demonstrate that abstract categories emerge surprisingly early (~120 ms) in the time course of speech processing and are dominated by engagement of a relatively compact fronto-temporal-parietal brain network.
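The sketch below mirrors, in broad strokes, the two methods named above: a time-resolved linear SVM to estimate when category information becomes decodable, and a simple stability-selection loop (repeated subsampling with an L1-penalized model) to flag which ROIs carry that information. It is a hedged sketch on synthetic data; the trial counts, encoding window, subsample fraction, and selection threshold are assumptions, not values from the study.

```python
# Sketch: time-resolved SVM decoding of source-level ERPs + stability selection over ROIs.
# Data shapes, windows, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_rois, n_times = 200, 68, 100               # hypothetical: 68 ROIs, 100 time samples
X = rng.standard_normal((n_trials, n_rois, n_times))   # placeholder source-level ERP amplitudes
y = rng.integers(0, 2, n_trials)                       # 0 = ambiguous, 1 = prototypical token

# 1) "When": decode category at each time point from the whole-brain ROI pattern.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
acc_over_time = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=StratifiedKFold(5)).mean()
    for t in range(n_times)
])
print("Peak decoding accuracy %.2f at sample %d" % (acc_over_time.max(), acc_over_time.argmax()))

# 2) "Where": stability selection -- refit an L1-penalized model on random subsamples
#    and keep ROIs selected in a large fraction of the fits.
window = X[:, :, 10:26].mean(axis=2)                   # assumed early encoding window
n_resamples, keep_frac, threshold = 100, 0.5, 0.7
selection_counts = np.zeros(n_rois)
for _ in range(n_resamples):
    idx = rng.choice(n_trials, size=int(keep_frac * n_trials), replace=False)
    lasso = make_pipeline(StandardScaler(),
                          LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
    lasso.fit(window[idx], y[idx])
    selection_counts += (np.abs(lasso.named_steps["logisticregression"].coef_[0]) > 1e-8)
stable_rois = np.where(selection_counts / n_resamples >= threshold)[0]
print("Stable ROIs:", stable_rois)
```

In practice the same two-step logic would be run separately on whole-brain, LH-only, and RH-only feature sets, and the stable-ROI counts compared across encoding and decision windows.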
Decoding Hearing Loss via EEG

… (clear speech); whereas 16 regions were critical for noise-degraded speech to achieve a comparable level of group segregation (78.7% accuracy). Our results identify critical time courses and brain regions that distinguish mild hearing loss from normal hearing in older adults and confirm a larger number of active areas, particularly in the RH, when processing noise-degraded speech information.