The prediction follows that neural responses also track visual speech signals. Indeed, neural responses entrain to speakers’ lip movements in various listening situations: clear audio-visual speech from a single speaker (Luo et al., 2010; Micheli et al., 2020; Mégevand et al., 2020), clear audio-visual speech from multiple speakers (Zion Golumbic et al., 2013; Park et al., 2016), silent visual-only speech (O’Sullivan et al., 2017; Hauswald et al., 2018; Bourguignon et al., 2020; Nidiffer et al., 2021), and audio-visual speech-in-noise (Keitel et al., 2018). Here, we investigated how visual and auditory speech signals are combined to support comprehension without prior segregation of speech from background noise.