“…Prior studies have shown that access to such redundant AV speech cues, compared with auditory-only situations, can facilitate lexical access and, more broadly, speech comprehension, most notably when the acoustic signal becomes difficult to understand because of noise [6,7,8,9,10,11] or an unfamiliar accent or language (e.g., [12,13,14]). On such occasions, adult listeners have been shown to increase their visual attention (hereafter, attention) to the talker’s mouth in order to maximize their uptake of AV speech cues and thereby enhance speech processing; for instance, when background acoustic noise increases [15,16], when volume is low [17], when their language proficiency is low [18,19,20], or when they are performing particularly challenging speech-processing tasks (e.g., speech segmentation [21] or sentence comparison [18]). On the other hand, when speech-processing demands are reduced, adults modulate their attention and focus more on the talker’s eyes [21,22,23], which can also support language understanding by constraining interpretation (e.g., of the speaker’s current emotion or current focus of attention).…”