Another paradigm that enables examining perceptual mechanisms associated with short-term audio-visual learning is phonetic recalibration (also “perceptual learning”; Samuel and Kraljic, 2009; Vroomen and Baart, 2012). Recalibration refers to a shift in an individual’s perception of ambiguous speech induced by the presentation of disambiguating input, such as lip-read speech (Bertelson et al., 2003; Vroomen and Baart, 2012), spoken word context (Norris et al., 2003), overt speech articulation (Scott, 2016), or text (Bonte et al., 2017; Keetels et al., 2018; Romanovska et al., 2019). In the classical paradigm, an ambiguous speech sound, e.g., /a?a/, midway between /aba/ and /ada/, is combined with a disambiguating video of a speaker articulating “aba” or “ada.” The subsequent perception of the ambiguous sound in auditory-only trials is temporarily biased in the direction of the video: it is perceived as /aba/ following an “aba” video and as /ada/ following an “ada” video.