Objectives:
To investigate the influence of frequency-specific audibility on audiovisual benefit in children, this study examined the impact of high- and low-pass acoustic filtering on auditory-only and audiovisual word and sentence recognition in children with typical hearing. Previous studies show that visual speech provides greater access to consonant place of articulation than to other consonant features and that low-pass filtering has a strong impact on perception of acoustic consonant place of articulation. This suggests that visual speech may be particularly useful when acoustic speech is low-pass filtered because it provides complementary information about consonant place of articulation. Therefore, we hypothesized that audiovisual benefit would be greater for low-pass filtered speech than for high-pass filtered speech. We also assessed whether this pattern of results would extend to sentence recognition.
Design:
Children with typical hearing completed auditory-only and audiovisual tests of consonant–vowel–consonant word and sentence recognition across conditions differing in acoustic frequency content: a low-pass filtered condition in which children could access only acoustic content below 2 kHz and a high-pass filtered condition in which children could access only acoustic content above 2 kHz. They also completed a visual-only test of consonant–vowel–consonant word recognition. We analyzed word, consonant, and keyword-in-sentence recognition and consonant feature (place of articulation, voice/manner of articulation) transmission accuracy across modalities and filter conditions using binomial generalized linear mixed models. To assess the degree to which visual speech is complementary to versus redundant with acoustic speech, we calculated the proportion of auditory-only target-response consonant pairs that are distinguishable using visual speech alone and compared these proportions between the high-pass and low-pass filter conditions.
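As an illustration of the complementarity measure described above, the sketch below shows one way such a proportion could be computed. It is not the authors' analysis code; the viseme grouping, the data format, and the example confusion pairs are hypothetical assumptions introduced only to make the logic concrete.

```python
# Illustrative sketch (hypothetical, not the authors' code): estimating how often
# auditory-only consonant confusions involve pairs that are visually distinguishable,
# i.e., how much visual speech could complement the degraded acoustic signal.

# Assumed viseme classes: consonants in the same class are treated as visually
# indistinguishable; consonants in different classes as visually distinct.
VISEME_CLASS = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "alveolar", "d": "alveolar", "s": "alveolar", "z": "alveolar", "n": "alveolar",
    "k": "velar", "g": "velar",
}

def visually_distinct_proportion(confusions):
    """Proportion of target-response error pairs that fall in different viseme classes.

    `confusions` is a list of (target, response) consonant pairs taken from
    auditory-only responses in one filter condition.
    """
    errors = [(t, r) for t, r in confusions if t != r]
    if not errors:
        return 0.0
    distinct = sum(1 for t, r in errors if VISEME_CLASS.get(t) != VISEME_CLASS.get(r))
    return distinct / len(errors)

# Hypothetical example: low-pass errors often cross place of articulation (and hence
# viseme class), whereas high-pass errors tend to stay within a viseme class.
low_pass_confusions = [("t", "p"), ("k", "t"), ("d", "b"), ("s", "f")]
high_pass_confusions = [("p", "b"), ("t", "d"), ("s", "z"), ("f", "v")]

print(visually_distinct_proportion(low_pass_confusions))   # higher -> visual cues more complementary
print(visually_distinct_proportion(high_pass_confusions))  # lower  -> visual cues more redundant
```

A higher proportion in one filter condition would indicate that visual speech carries more of the information listeners miss acoustically in that condition.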
Results:
In auditory-only conditions, recognition accuracy for consonants and consonant features was lower in the low-pass filtered condition than in the high-pass filtered condition, especially for consonant place of articulation. In visual-only conditions, recognition accuracy was greater for consonant place of articulation than for consonant voice/manner of articulation. In addition, auditory consonants in the low-pass filtered condition were more likely to be confused with visually distinct consonants, meaning that there was greater opportunity to use visual cues to supplement missing auditory information in the low-pass filtered condition. Audiovisual benefit for isolated whole words was greater for low-pass filtered speech than for high-pass filtered speech. No difference in audiovisual benefit between filter conditions was observed for phonemes, consonant features, or keywords in sentences. Ceiling effects limit the interpretation of these nonsignificant interactions.
Conclusions:
For isolated word recognition, visual speech is more complementary to the acoustic speech cues children can access when high-frequency acoustic content is eliminated by low-pass filtering than when low-frequency acoustic content is eliminated by high-pass filtering. This decreased auditory-visual phonetic redundancy is accompanied by larger audiovisual benefit. In contrast, audiovisual benefit for sentence recognition did not differ between low-pass and high-pass filtered speech. This might reflect ceiling effects in the audiovisual conditions or a smaller role for auditory-visual phonetic redundancy in audiovisual benefit for connected speech. These results from children with typical hearing suggest that variability in audiovisual benefit among children who are hard of hearing may depend in part on frequency-specific audibility.