“…Whenever the other sensory systems (e.g., vision) provide reliable spatial information about auditory events, the brain exploits these additional sensory sources to calibrate and optimize internal models for spatial hearing (Keating & King, 2015). This notion emerged, for instance, from studies that examined re-learning of sound-space correspondences when auditory cues were temporarily altered using monaural ear-plugs (Rabini et al., 2019; Strelnikov et al., 2011; Trapeau & Schönwiesner, 2015), ear molds (Van Wanrooij & Van Opstal, 2005) or non-individualised HRTFs (Head-Related Transfer Functions; Honda, Shibata, Gyoba, Saitou, Iwaya & Suzuki, 2007; Parseihian & Katz, 2012; Steadman, Kim, Lestang, Goodman & Picinali, 2019). In these simulated altered-listening conditions, multisensory training procedures proved effective for re-learning sound-space correspondences (for reviews see: Carlile, 2014; Keating & King, 2015; Knudsen & Knudsen, 1985; Mendonça, 2014; Irving & Moore, 2011).…”