“…While acoustic speech consists of temporal and spectral modulations of sound pressure, visual speech consists of movements of the mouth, head, and hands. Movements of the mouth, lips, and tongue in particular provide both redundant and complementary information to acoustic cues (Hall et al., 2005; Peelle and Sommers, 2015; Plass et al., 2019; Summerfield, 1992), and can enhance speech intelligibility in noisy environments and in a second language (Navarra and Soto-Faraco, 2007; Sumby and Pollack, 1954; Yi et al., 2013). While a plethora of studies have investigated the cerebral mechanisms underlying speech perception in general, we still have a limited understanding of the networks specifically mediating visual speech perception, that is, lip reading (Bernstein and Liebenthal, 2014; Capek et al., 2008; Crosse et al., 2015).…”