We investigated the frame of reference involved in audio-visual (AV) fusion over space. This multisensory phenomenon refers to the perception of unity resulting from visual and auditory stimuli despite their potential spatial disparity. The extent of this illusion depends on the eccentricity in azimuth of the bimodal stimulus (Godfroy et al, 2003, Perception, 32, 1233-1245). In a previous study, conducted in a luminous environment, Roumes et al (2004, Perception, 33, Supplement, 142) showed that the variation of AV fusion is gaze-dependent. Here we examine the contribution of egocentric and allocentric visual cues by conducting the experiment in total darkness. Auditory and visual stimuli were displayed in synchrony with various spatial disparities. Subjects had to judge their unity ('fusion' or 'no fusion'). Results showed that AV fusion in darkness remains gaze-dependent despite the lack of any allocentric cues, confirming the hypothesis that the reference frame of the bimodal space is neither head-centred nor eye-centred.
Neurophysiological tests probing the vestibulo-ocular, vestibulo-collic and vestibulo-spinal pathways are the gold standard for evaluating the vestibular system in clinics. In contrast, vestibular perception is rarely tested despite its potential usefulness in professional training and in the longitudinal follow-up of professionals dealing with complex man-machine interfaces, such as aircraft pilots. This is explored here using a helicopter flight simulator to probe the vestibular perception of pilots. The vestibular perception of nine professional helicopter pilots was tested using a full-flight helicopter simulator. The cabin was tilted six times in roll and six times in pitch (−15°, −10°, −5°, 5°, 10° and 15°) while the pilots had no visual cues. The velocities of the outbound displacement of the cabin were kept below the perceptual threshold of the semicircular canals. After the completion of each movement, the pilots were asked to return the cabin to the horizontal plane (still without visual cues). The order of the 12 trials was randomized, with two additional control trials in which the cabin stayed in the horizontal plane but rotated in yaw (−10° and +10°). Pilots were significantly more precise in roll (average error: 1.15 ± 0.67°) than in pitch (average error: 2.89 ± 1.06°) (Wilcoxon signed-rank test: p < 0.01). However, we did not find a significant difference either between left and right roll tilts (p = 0.51) or between forward and backward pitch tilts (p = 0.59). Furthermore, we found that accuracy was significantly biased by the initial tilt: the larger the initial tilt, the less precise the pilots were, although they maintained the direction of the tilt, so that the error can be expressed as a vestibular gain on the perceived change in orientation. This significant effect was found in both roll (Friedman test: p < 0.01) and pitch (p < 0.001), but was more pronounced in pitch (gain = 0.77) than in roll (gain = 0.93). This study is a first step in the determination of the perceptive-motor profile of pilots, which could be of major use for their training and their longitudinal follow-up. A similar protocol may also be useful in clinics to monitor the aging process of the otolith system with a simplified testing device.
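As an illustration only, the Python sketch below shows one way the return errors could be scored and the reported nonparametric tests run. The data are fabricated placeholders (chosen merely to mimic the reported magnitudes), and the scoring and gain definition are our assumptions, not the authors' analysis pipeline.

```python
# Minimal sketch (not the study's code) of scoring tilt-return errors and
# running the nonparametric tests named in the abstract.
import numpy as np
from scipy import stats

tilts = np.array([-15, -10, -5, 5, 10, 15], dtype=float)  # initial cabin tilts (deg)

# responses[i, j] = signed residual cabin angle left by pilot i after trying to
# return from tilts[j] to horizontal (0 deg would be a perfect correction).
# Placeholder data: slopes chosen to roughly reproduce the reported gains.
rng = np.random.default_rng(0)
responses_roll  = 0.07 * tilts + rng.normal(0, 0.7, size=(9, 6))   # fake data
responses_pitch = 0.23 * tilts + rng.normal(0, 1.0, size=(9, 6))   # fake data

# Mean absolute return error per pilot, then paired roll-vs-pitch comparison.
err_roll  = np.abs(responses_roll).mean(axis=1)
err_pitch = np.abs(responses_pitch).mean(axis=1)
print(stats.wilcoxon(err_roll, err_pitch))

# Effect of initial tilt on the error: Friedman test across the six tilt
# conditions, one value per pilot and condition.
print(stats.friedmanchisquare(*np.abs(responses_roll).T))

# "Vestibular gain": a perceived-tilt gain g leaves a residual of roughly
# (1 - g) * tilt, so g is 1 minus the slope of residual angle vs initial tilt.
slope_roll = np.polyfit(np.tile(tilts, 9), responses_roll.ravel(), 1)[0]
print("estimated roll gain ~", 1 - slope_roll)
```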
Variation over space of audio-visual (AV) spatial fusion has been investigated in darkness and in light conditions (Hartnagel et al, 2007; Roumes et al, 2004). Those experiments revealed a gaze-shift effect, indicating that the reference frame of AV fusion space is neither head- nor eye-centered. Results in vision research have shown an influence of allocentric visual reference frames on visual localization: Schmidt et al (2003) showed a local distortion effect of a visual landmark, and experiments on the Roelofs effect have shown a shift of localization relative to an asymmetric surrounding display (Dassonville et al, 2004). Our experiment investigates visual allocentric effects on AV fusion. The apparatus consisted of a hemi-cylindrical screen hiding 21 loudspeakers in a 2D arrangement; a projector displayed a large permanent green rectangular background (135°H × 80°V) on the screen. The participant (head and body aligned) was oriented sideways so that the display appeared shifted 15° to the right of straight ahead, making the surrounding visual frame asymmetrical (frame offset). In each trial, a vertical line providing a visual landmark appeared randomly either straight ahead (head 0°) or 15° to the right (mid-display), while a broadband noise burst and a 1° spot of light, both 500 ms in duration, were presented simultaneously with a random 2D spatial disparity. The task was to judge the spatial unity of the bimodal stimulus (fusion). Results showed that AV fusion depends mainly on the relative position of the egocentric reference frames (gaze and head) and that the local allocentric reference frame has no significant effect. Comparisons with previous results confirm the importance of the surrounding visual display.
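For illustration, here is a minimal Python sketch of how binary fusion judgments could be summarized as a psychometric function of AV angular disparity. The logistic model, the helper fusion_prob, and all numbers are hypothetical; a full treatment of the study's 2D disparities would fit the raw binary responses (e.g. with a logistic GLM) rather than averaged proportions.

```python
# Hedged sketch: fit a logistic psychometric function to fusion rates as a
# function of AV angular disparity, then compare fitted parameters across
# landmark or gaze conditions.
import numpy as np
from scipy.optimize import curve_fit

def fusion_prob(disparity, d50, slope):
    """Probability of reporting 'fusion' as a function of disparity (deg)."""
    return 1.0 / (1.0 + np.exp((disparity - d50) / slope))

# Hypothetical summary data: AV disparity (deg) and observed fusion rate.
disparity = np.array([0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20], dtype=float)
p_fused   = np.array([0.98, 0.95, 0.90, 0.78, 0.60, 0.45, 0.30,
                      0.20, 0.12, 0.07, 0.04])

# d50 is the disparity at which fusion reports drop to 50%.
(d50, slope), _ = curve_fit(fusion_prob, disparity, p_fused, p0=(10.0, 3.0))
print(f"fusion limit (50% point): {d50:.1f} deg, slope parameter: {slope:.1f} deg")

# Comparing d50 fitted separately per landmark position (straight ahead vs
# mid-display) or gaze direction would test whether the fusion area shifts
# with the allocentric or egocentric frame.
```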