“…One area of particular interest to multisensory (MS) researchers is speech recognition, where it has long been known that visual articulatory cues can strongly influence auditory speech perception (McGurk & MacDonald, 1976; Saint-Amour et al., 2007; Tjan et al., 2014). This is especially true when the auditory speech signal is ambiguous, as is often the case when the background environment is noisy or there are multiple simultaneous speakers (Benoit et al., 1994; Foxe et al., 2015; Foxe et al., 2020; Ma et al., 2009; MacLeod & Summerfield, 1987; Molholm et al., 2020; Richie & Kewley-Port, 2008; Ross et al., 2011; Ross, Saint-Amour, Leavitt, Javitt, et al., 2007; Senkowski et al., 2008; Sumby & Pollack, 1954). Indeed, although most of us are poor lip readers when only the visual signal is available, the enhancing effects of visual speech can be dramatic, such that visual inputs can render otherwise indecipherable vocalizations clearly intelligible.…”