When objects within the field of view (FoV) of a 3D sensor are sonified in a manner that replicates the perception of sounds emanating from the locations of those objects, the process is also referred to as a “virtual acoustic space” (Eckert, Blex, & Friedrich, 2018; González-Mora, Rodriguez-Hernandez, Burunat, Martin, & Castellano, 2006; González-Mora, Rodriguez-Hernandez, Rodriguez-Ramos, Díaz-Saco, & Sosa, 1999; Rodríguez-Hernández et al., 2010). One modern incarnation is the “Synaestheatre”, which converts a depth image from a 3D sensor into realistically spatialised sounds (Hamilton-Fletcher, Obrist, et al., 2016). The sounds are spatialised using a head-related transfer function (HRTF), which describes how the spatial positions of the listener and the sound source alter the sound received at each ear in terms of interaural timing and intensity differences, as well as the spectral distortions created by the head and pinnae (Algazi, Avendano, & Duda, 2001; Kistler & Wightman, 1992; Kulkarni, Isabelle, & Colburn, 1999; Potisk, 2015).
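
To make these cues concrete, the following minimal Python sketch applies the two dominant binaural cues that an HRTF encodes, interaural time difference (ITD) and interaural level difference (ILD), to a mono signal. This is not the Synaestheatre's implementation: it omits the pinna and head filtering of a measured HRTF, and the head radius, panning law, and function names are illustrative assumptions only.

```python
# Minimal sketch of ITD/ILD spatialisation (a crude stand-in for full
# HRTF convolution; all parameter values are illustrative assumptions).
import numpy as np

FS = 44_100           # sample rate (Hz)
HEAD_RADIUS = 0.0875  # approximate head radius (m)
SPEED_OF_SOUND = 343  # m/s


def spatialise(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Pan a mono signal to stereo using crude ITD/ILD cues.

    azimuth_deg: 0 = straight ahead, +90 = hard right, -90 = hard left.
    Returns an (N, 2) float array [left, right].
    """
    az = np.radians(azimuth_deg)
    # Woodworth approximation of the ITD for a distant source.
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (abs(az) + np.sin(abs(az)))
    delay_samples = int(round(itd * FS))
    # Simple sine-law ILD, a placeholder for the frequency-dependent
    # head-shadow attenuation a real HRTF would apply to the far ear.
    gain_near = np.sqrt(0.5 * (1 + np.sin(abs(az))))
    gain_far = np.sqrt(0.5 * (1 - np.sin(abs(az))))
    near = mono * gain_near
    # Delay the far-ear signal by prepending zeros, then truncate.
    far = np.concatenate([np.zeros(delay_samples), mono])[: len(mono)] * gain_far
    if azimuth_deg >= 0:  # source on the right: right ear is the near ear
        return np.column_stack([far, near])
    return np.column_stack([near, far])


# Example: a 440 Hz tone placed 60 degrees to the listener's right.
t = np.arange(FS) / FS
stereo = spatialise(0.3 * np.sin(2 * np.pi * 440 * t), azimuth_deg=60)
```

A full implementation would instead convolve the source signal with measured left- and right-ear HRTF impulse responses selected for the target azimuth and elevation, for example from the database described by Algazi, Avendano, and Duda (2001), which is what captures the head and pinna distortions mentioned above.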