Factors leading to variability in auditory-visual (AV) speech recognition include the subject's ability to extract auditory (A) and visual (V) signal-related cues, the integration of A and V cues, and the use of phonological, syntactic, and semantic context. In this study, measures of A, V, and AV recognition of medial consonants in isolated nonsense syllables and of words in sentences were obtained in a group of 29 hearing-impaired subjects. The test materials were presented in a background of speech-shaped noise at 0-dB signal-to-noise ratio. Most subjects achieved substantial AV benefit for both sets of materials relative to A-alone recognition performance. However, there was considerable variability in AV speech recognition both in terms of the overall recognition score achieved and in the amount of audiovisual gain. To account for this variability, consonant confusions were analyzed in terms of phonetic features to determine the degree of redundancy between A and V sources of information. In addition, a measure of integration ability was derived for each subject using recently developed models of AV integration. The results indicated that (1) AV feature reception was determined primarily by visual place cues and auditory voicing + manner cues, (2) the ability to integrate A and V consonant cues varied significantly across subjects, with better integrators achieving more AV benefit, and (3) significant intra-modality correlations were found between consonant measures and sentence measures, with AV consonant scores accounting for approximately 54% of the variability observed for AV sentence recognition. Integration modeling results suggested that speechreading and AV integration training could be useful for some individuals, potentially providing as much as 26% improvement in AV consonant recognition.
Seventeen hearing-impaired adults were fit with omnidirectional/directional hearing aids, which they wore during a four-week trial. For each listening situation encountered in daily living during a total of seven days, participants selected the preferred microphone mode and described the listening situation in terms of five environmental variables, using a paper-and-pencil form. Results indicated that hearing-impaired adults typically spend the majority of their active listening time in situations with background noise present and surrounding the listener, and with the signal source located in front of and relatively near the listener. Microphone preferences were fairly evenly distributed across listening situations but differed depending on the characteristics of the listening environment. The omnidirectional mode tended to be preferred in relatively quiet listening situations or, in the presence of background noise, when the signal source was relatively far away. The directional mode tended to be preferred when background noise was present and the signal source was located in front of and relatively near the listener. Results suggest that, given only the signal location and distance and whether background noise is present or absent, omnidirectional/directional hearing aids can be set to the preferred mode in most everyday listening situations. These findings have relevance for counseling patients on when to set manually switchable omnidirectional/directional hearing aids in each microphone mode, as well as for the development of automatic algorithms for selecting omnidirectional versus directional microphone processing.
Visual recognition of consonants was studied in 31 hearing-impaired adults before and after 14 hours of concentrated, individualized speechreading training. Confusions were analyzed via a hierarchical clustering technique to derive categories of visual contrast among the consonants. Pretraining and posttraining results were compared to reveal the effects of the training program. Training produced an increase in the number of visemes consistently recognized and an increase in the percentage of within-viseme responses. Analysis of the responses revealed that most changes in consonant recognition occurred during the first few hours of training.
This study compared unilateral and bilateral aided speech recognition in background noise in 28 patients being fitted with amplification. Aided QuickSIN (Quick Speech-in-Noise test) scores were obtained for bilateral amplification and for unilateral amplification in each ear. In addition, right-ear directed and left-ear directed recall on the Dichotic Digits Test (DDT) was obtained from each participant. Results revealed that the vast majority of patients obtained better speech recognition in background noise on the QuickSIN with unilateral amplification than with bilateral amplification. Bilateral amplification showed a greater tendency toward a deleterious effect among older patients. Most frequently, better aided QuickSIN performance was obtained in the right ear, despite similar hearing thresholds in both ears. Finally, patients tended to perform better on the DDT in the ear that showed less SNR loss on the QuickSIN. Results suggest that bilateral amplification may not be beneficial in every daily listening environment in which background noise is present, and it may be advisable for patients wearing bilateral amplification to remove one hearing aid when they have difficulty understanding speech in background noise.
Persons with impaired hearing who are candidates for amplification are not all equally successful with hearing aids in daily living. The ability to predict success with amplification in everyday life from measures obtainable during an initial evaluation of the patient's candidacy would yield greater patient satisfaction with hearing aids and more efficient use of clinical resources. This study investigated the relationship between various demographic and audiometric measures and two measures of hearing aid success in 50 hearing aid wearers. Audiometric predictors included measures of audibility and suprathreshold distortion. The unaided and aided signal-to-noise ratio (SNR) loss on the QuickSIN test provided the best predictors of hearing aid success in daily living. However, much of this predictive relationship appeared attributable to the patient's age.