Lay Abstract
Background
Conversation requires integration of information from faces and voices to fully understand the speaker’s message. To detect auditory-visual asynchrony of speech, listeners must integrate visual movements of the face, particularly the mouth, with auditory speech information. Individuals with autism spectrum disorder (ASD) may be less successful at such multisensory integration, despite their demonstrated preference for looking at the mouth region of a speaker.
Method
We showed a split-screen video of two identical individuals speaking side by side. Only one of the speakers was in synchrony with the corresponding audio track, and synchrony switched between the two speakers every few seconds. Participants either watched the video with no further instructions (implicit condition) or were told to specifically watch the in-synch speaker (explicit condition). We recorded which part of the screen and face their eyes targeted.
Participants
Individuals with and without high-functioning autism (HFA) aged 8–19.
Results
Both groups looked at the in-synch video significantly more with explicit instructions. However, participants with HFA looked at the in-synch video less than typically developing (TD) peers and did not increase their gaze time as much as TD participants in the explicit task. Importantly, the HFA group looked significantly less at the mouth than their TD peers, and significantly more at the non-face regions of the image. There were no between-group differences for eye-directed gaze.
Conclusions
Individuals with HFA spend less time looking at the crucially important mouth region of the face during auditory-visual speech integration, which is an ineffective gaze strategy for this type of task.
Scientific Abstract
Background
Conversation requires integration of information from faces and voices to fully understand the speaker’s message. To detect auditory-visual asynchrony of speech, listeners must integrate visual movements of the face, particularly the mouth, with auditory speech information. Individuals with autism spectrum disorder (ASD) may be less successful at such multisensory integration, despite their demonstrated preference for looking at the mouth region of a speaker.
Method
We showed a split-screen video of two identical individuals speaking side by side. Only one of the speakers was in synchrony with the corresponding audio track, and synchrony switched between the two speakers every few seconds. Participants were asked either to watch the video without further instructions (implicit condition) or to specifically watch the in-synch speaker (explicit condition). We recorded which part of the screen and face their eyes targeted.
Participants
Individuals with and without high-functioning autism (HFA) aged 8–19.
Results
Both groups looked at the in-synch video significantly more with explicit instructions. However, participants with HFA looked at the in-synch video less than typically developing (TD) peers and did not increase their gaze time as much as TD participants in the explicit task.