Evidence of visual-auditory cross-modal plasticity in deaf individuals has been widely reported. The superior visual abilities of deaf individuals have been shown to manifest as enhanced reactivity to visual events and/or enhanced peripheral spatial attention. The goal of this study was to investigate the association between visual-auditory cross-modal plasticity and speech perception in post-lingually deafened, adult cochlear implant (CI) users. Post-lingually deafened adults with CIs (N = 14) and a group of normal-hearing adult controls (N = 12) participated in this study. The CI participants were divided into a good performer group (good CI, N = 7) and a poor performer group (poor CI, N = 7) based on word recognition scores. Visual evoked potentials (VEPs) were recorded from the temporal and occipital cortex to assess reactivity. Visual field (VF) testing was used to assess spatial attention, and Goldmann perimetry measures were analyzed to identify differences in the VF across groups. P1 VEP amplitudes over the right temporal and occipital cortices were compared among the three groups (control, good CI, poor CI). In addition, the association between the VF for different stimulus intensities and word recognition score was evaluated. The P1 VEP amplitude recorded from the right temporal cortex was larger in the poorly performing CI users than in the good performers, whereas the P1 amplitude recorded from electrodes near the occipital cortex was smaller in the poorly performing group. P1 VEP amplitude over the right temporal lobe was negatively correlated with speech perception outcomes in the CI participants (r = -0.736, P = 0.003), whereas P1 VEP amplitude recorded near the occipital cortex was positively correlated with speech perception outcome (r = 0.775, P = 0.001). In the VF analysis, CI users showed a narrowed central VF (VF to low-intensity stimuli), but their far peripheral VF (VF to high-intensity stimuli) did not differ from that of the controls. In addition, the extent of their central VF was positively correlated with speech perception outcome (r = 0.669, P = 0.009). Persistent visual activation of the right temporal cortex even after implantation thus has a negative effect on outcomes in post-lingually deafened adults. We interpret these results to suggest that insufficient intra-modal (visual) compensation by the occipital cortex may negatively affect outcomes. Based on our results, a narrowed central VF could help identify CI users likely to have poor outcomes with their device.
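A minimal sketch of the kind of correlation analysis reported above (P1 VEP amplitude vs. word recognition score). The values below are invented for illustration only, not the study's data; the routine simply computes a Pearson correlation as in the reported r and P statistics.

```python
# Hedged illustration: Pearson correlation between hypothetical P1 amplitudes (uV)
# over the right temporal cortex and word recognition scores (%) for N = 14 CI users.
from scipy.stats import pearsonr

p1_temporal_uv = [4.1, 3.8, 3.5, 3.2, 2.9, 2.6, 2.3, 2.1, 1.9, 1.7, 1.5, 1.3, 1.1, 0.9]
word_score_pct = [22, 30, 35, 41, 48, 52, 60, 63, 68, 72, 78, 81, 86, 90]

r, p = pearsonr(p1_temporal_uv, word_score_pct)
print(f"Pearson r = {r:.3f}, P = {p:.3f}")  # a negative r would mirror the reported r = -0.736
```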
Background and Objectives: The present study aimed to investigate whether cochlear implant electrode array design affects electrophysiological and psychophysical measures. Subjects and Methods: Eighty-five ears were included in this retrospective study. They were divided into two groups by electrode array design: lateral wall type (LW) and perimodiolar type (PM). Electrode sites were divided into three regions (basal, medial, apical). The electrically evoked compound action potential (ECAP) threshold, T level, C level, dynamic range (DR), and aided air-conduction threshold were measured. Results: The ECAP threshold was lower for the PM than for the LW and decreased as the electrode site moved toward the apical region. The T level was lower for the PM than for the LW and was lower in the apical region than in the other regions. The C level in the basal region was lower for the PM than for the LW, whereas the C level was lower in the apical region than in the other regions. The DR in the apical region was greater for the PM than for the LW, whereas the DR was narrower in the apical region than in the other regions. The aided air-conduction threshold did not differ by electrode array design or frequency. Conclusions: The current study supports the advantages of the PM over the LW in that the PM had lower current levels and a greater DR, which could result in more localized neural stimulation and reduced power consumption.
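For readers unfamiliar with the fitting terminology, the dynamic range discussed above is simply the C level minus the T level on each electrode. A hedged sketch follows; the levels and the electrode-to-region grouping are hypothetical, not taken from the study.

```python
# Hypothetical T and C levels (in clinical current-level units) per electrode region.
t_levels = {"basal": 120, "medial": 115, "apical": 105}
c_levels = {"basal": 185, "medial": 180, "apical": 160}

# Dynamic range (DR) = C level - T level for each region.
dynamic_range = {region: c_levels[region] - t_levels[region] for region in t_levels}
print(dynamic_range)  # e.g. {'basal': 65, 'medial': 65, 'apical': 55}
```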
Background and Objectives: People usually converse in real-life background noise and experience more difficulty understanding speech in noise than in a quiet environment. The present study investigated how speech recognition in real-life background noise is affected by the type of noise, signal-to-noise ratio (SNR), and age. Subjects and Methods: Eighteen young adults and fifteen middle-aged adults with normal hearing participated in the present study. Three types of noise [subway noise, vacuum noise, and multi-talker babble (MTB)] were presented via a loudspeaker at three SNRs (5, 0, and -5 dB). Speech recognition was analyzed using the word recognition score. Results: 1) Speech recognition was greatest in subway noise compared with vacuum noise and MTB; 2) at the SNR of -5 dB, speech recognition was greater in subway noise than in vacuum noise and greater in vacuum noise than in MTB, whereas at the SNRs of 0 and 5 dB it was greater in subway noise than in both vacuum noise and MTB, with no difference between vacuum noise and MTB; 3) speech recognition decreased as the SNR decreased; and 4) young adults showed better speech recognition than middle-aged adults in all noise types at all SNRs. Conclusions: Speech recognition in real-life background noise was affected by the type of noise, the SNR, and age. The results suggest that frequency distribution, amplitude fluctuation, informational masking, and cognition may be important underlying factors determining speech recognition performance in noise.
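As a worked illustration of the SNR conditions above, the sketch below scales a noise signal so that the speech-to-noise power ratio equals a target SNR in dB. It is a minimal, assumed implementation (the signals here are a placeholder tone and white noise), not the study's stimulus-preparation procedure.

```python
# Mix speech and noise at a target SNR: scale the noise so that
# 10*log10(P_speech / P_noise) equals snr_db.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    p_speech = np.mean(speech ** 2)                      # speech power
    p_noise = np.mean(noise ** 2)                        # noise power before scaling
    target_p_noise = p_speech / (10 ** (snr_db / 10.0))  # noise power needed for target SNR
    return speech + noise * np.sqrt(target_p_noise / p_noise)

# Illustrative 1-second signals at 16 kHz (a tone standing in for speech).
fs = 16000
t = np.arange(fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 440 * t)
noise = 0.05 * np.random.randn(fs)
mixture = mix_at_snr(speech, noise, snr_db=-5)
```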
The purpose was to assess whether phonemic categorization in sentential context is best explained by autonomous feedforward processing or by top-down feedback processing that affects phonemic representation. Eleven listeners with normal hearing, aged 20-50 years, were asked to label consonants in /pi/-/ti/ consonant-vowel (CV) stimuli along 9-step continua. One continuum was derived from natural tokens and the other was synthetically generated. The CV stimuli were presented in isolation and in three sentential contexts: a neutral context, a context favoring /p/, and a context favoring /t/. For both natural and synthetic stimuli, the isolated and neutral-context conditions yielded significantly more /t/ responses than sentence contexts primed for either /p/ or /t/. No other conditions differed significantly. The results did not show easily explainable semantic context effects; instead, the clustering of the data was more readily explained by top-down feedback processing affecting phonemic representation.
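Labeling data from a stepped continuum like the one above are commonly summarized with a psychometric function. The sketch below fits a logistic curve to hypothetical proportions of /t/ responses across nine continuum steps; it is an assumed, generic analysis for illustration, not the procedure reported in the study.

```python
# Fit a logistic psychometric function to invented /t/-response proportions
# across a 9-step /pi/-/ti/ continuum to estimate the category boundary.
import numpy as np
from scipy.optimize import curve_fit

steps = np.arange(1, 10)                                   # continuum steps 1..9
prop_t = np.array([0.02, 0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97, 0.99])

def logistic(x, midpoint, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

(midpoint, slope), _ = curve_fit(logistic, steps, prop_t, p0=[5.0, 1.0])
print(f"category boundary near step {midpoint:.2f}, slope {slope:.2f}")
```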