This study used eye-tracking methodology to assess audiovisual (AV) speech perception in 26 children ranging in age from 5 to 15 years, half with autism spectrum disorders (ASD) and half with typical development (TD). Given the characteristic reduction in gaze to the faces of others in children with ASD, it was hypothesized that they would show reduced influence of visual information on heard speech. Responses were compared on a set of auditory, visual, and audiovisual speech perception tasks. Even when fixated on the face of the speaker, children with ASD were less visually influenced than TD controls. This indicates fundamental differences in the processing of AV speech in children with ASD, which may contribute to their language and communication impairments.
We explored the variation in the resistance that lingual and nonlingual consonants exhibit to coarticulation by following vowels in the schwa+CV disyllables of two native speakers of English. Generally, lingual consonants other than /g/ were more resistant to coarticulation than the labial consonants /b/ and /v/. Coarticulation resistance in the consonant also affected articulatory evidence for transconsonantal vowel-to-vowel coarticulation, but did not show consistent acoustic effects. As for effects of coarticulation resistance in the following vowel, articulatory and acoustic effects were quite large at consonant release but much weaker farther into the following stressed vowel. Correlations between coarticulation resistance effects at consonant release and locus equation slopes were highly significant, consistent with the view that variation in coarticulation resistance explains differences among consonants in locus equation slopes.
Phoneme identification with audiovisually discrepant stimuli is influenced by information in the visual signal (the McGurk effect). Additionally, lexical status affects identification of auditorily presented phonemes. The present study tested for lexical influences on the McGurk effect. Participants identified phonemes in audiovisually discrepant stimuli in which the lexical status of the auditory component and of a visually influenced percept was independently varied. Visually influenced (McGurk) responses were more frequent when they formed a word and when the auditory signal was a nonword (Experiment 1). Lexical effects were larger for slow than for fast responses (Experiment 2), as with auditory speech, and were replicated with stimuli matched on physical properties (Experiment 3). These results are consistent with models in which lexical processing of speech is modality independent.