“…As noise reduces the acoustic information available in the speech signal, it might be more difficult for non‐native listeners to make a phonological mapping between the speech signal and perceptual/linguistic representations, as these might not have been fully tuned to the non‐native language (Flege; Iverson et al.; Lecumberri et al.). In such situations specifically, the visual phonological information conveyed by visible speech has been shown to enhance non‐native language learning and comprehension (Hannah et al.; Jongman, Wang, & Kim; Kawase, Hannah, & Wang; Kim, Sonic, & Davis; Wang, Behne, & Jiang). In native listeners, it has been suggested that visual attention is directed more often to a talker's mouth to extract additional information from visible speech when the speech is degraded (Buchan, Paré, & Munhall; Król; Munhall; Rennig, Wegner‐Clemens, & Beauchamp).…”