We investigated age-related effects in cross-modal interactions using tasks assessing spatial perception and object perception. Specifically, an audio-visual object identification task and an audio-visual object localisation task were used to assess putatively distinct perceptual functions in four age groups: children (8-11 years), adolescents (12-14 years), young adults, and older adults. Participants were required to either identify or locate target objects. Targets were specified as unisensory (visual/auditory) or multisensory (audio-visual congruent/audio-visual incongruent) stimuli. We found age-related effects on performance in both tasks. Both children and older adults were less accurate at locating objects than adolescents or young adults. Children were also less accurate at identifying objects than young adults, whereas performance did not differ between adolescents, young adults, and older adults. Older adults, children, and adolescents all showed a greater cost in accuracy for audio-visual incongruent relative to audio-visual congruent targets than young adults. However, we failed to find a benefit in performance for any age group, in either the identification or the localisation task, for audio-visual congruent targets relative to visual-only targets. Our findings suggest that visual information dominated when identifying or localising audio-visual stimuli. Furthermore, our results suggest that object identification and object localisation abilities mature late in development, and that spatial abilities may be more prone to age-related decline than object identification abilities. Finally, the results suggest that more sensitive measures may be required to reveal differences in cross-modal interactions across higher-level perceptual tasks.
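As a concrete illustration of the congruency cost measure reported above, the following minimal sketch computes, for each age group, the drop in accuracy for audio-visual incongruent relative to congruent targets. The group labels, data structure, and values are hypothetical, invented purely for illustration; this is not the study's analysis code.

```python
# Hypothetical per-participant accuracy (proportion correct) by condition.
accuracy = {
    "children":     {"congruent": [0.84, 0.80], "incongruent": [0.70, 0.66]},
    "adolescents":  {"congruent": [0.90, 0.88], "incongruent": [0.79, 0.81]},
    "young_adults": {"congruent": [0.93, 0.95], "incongruent": [0.90, 0.91]},
    "older_adults": {"congruent": [0.89, 0.87], "incongruent": [0.75, 0.73]},
}

def mean(values):
    return sum(values) / len(values)

# Congruency cost: accuracy lost when the auditory input conflicts with the
# visual target; larger values indicate stronger cross-modal interference.
for group, conditions in accuracy.items():
    cost = mean(conditions["congruent"]) - mean(conditions["incongruent"])
    print(f"{group}: congruency cost = {cost:.2f}")
```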
The current study examined the role of vision in spatial updating and its potential contribution to an increased risk of falls in older adults. Spatial updating was assessed using a path integration task in fall-prone and healthy older adults. Specifically, participants conducted a triangle completion task in which they were guided along two sides of a triangular route and were then required to return, unguided, to the starting point. During the task, participants either had a clear view of their surroundings (full vision) or wore translucent goggles that reduced visuo-spatial information (reduced vision). Path integration performance was measured by calculating the distance and angular deviation of the participant's return point from the starting point. Gait parameters for the unguided walk were also recorded. We found equivalent performance across groups on all measures in the full vision condition. In contrast, in the reduced vision condition, where participants had to rely on interoceptive cues to spatially update their position, fall-prone older adults made significantly larger distance errors than healthy older adults. There were no other performance differences between the two groups. These findings suggest that fall-prone older adults have greater difficulty than healthy older adults in reweighting other sensory cues for spatial updating when visual information is unreliable.
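To make the two error measures concrete, the sketch below computes the distance and angular deviation of a return point from the starting point in a triangle completion trial. The coordinate layout, variable names, and example values are our own assumptions for illustration; they are not taken from the study's analysis.

```python
import numpy as np

def path_integration_errors(start, stop, end):
    """Error measures for triangle completion (an assumed formulation).

    start : (x, y) of the starting point
    stop  : (x, y) where the guided walk ends and the unguided return begins
    end   : (x, y) where the participant actually stopped

    Returns (distance_error, angular_error_deg).
    """
    start, stop, end = map(np.asarray, (start, stop, end))

    # Distance error: how far the participant stopped from the start point.
    distance_error = np.linalg.norm(end - start)

    # Angular error: angle between the correct homing direction (stop -> start)
    # and the direction actually walked (stop -> end).
    correct = start - stop
    walked = end - stop
    cos_theta = np.dot(correct, walked) / (
        np.linalg.norm(correct) * np.linalg.norm(walked)
    )
    angular_error_deg = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

    return distance_error, angular_error_deg

# Example: a right-angled guided route with a slightly misjudged return.
print(path_integration_errors(start=(0, 0), stop=(3, 4), end=(0.5, 0.6)))
```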
Our understanding of human perception has developed significantly over the last 50 years, informed by research in neurophysiology, behavioural studies, psychophysics, and neuroimaging. When the Department of Psychology at Trinity College Dublin was founded 50 years ago, teaching and research in perception focused on each sense in isolation, with a strong emphasis on vision. Recent research has revealed that perception in one sensory modality can be significantly modified by inputs from the other senses. Moreover, such cross-sensory interactions seem to occur much earlier in information processing than was historically assumed. Here we highlight some of the main studies that best demonstrate how research in multisensory perception has enhanced our understanding of how the human brain processes information from the external world. In particular, we focus on higher-level perceptual tasks such as object, face, and body perception, and the perception of socially meaningful information, such as emotion and attractiveness. We also explore how multisensory processing changes throughout the lifespan. We argue that a multisensory approach provides a better insight into the functional properties of the perceptual brain.
This study investigated whether performance in recognising and locating target objects benefited from the simultaneous presentation of a cross-modal cue. Furthermore, we examined whether these ‘what’ and ‘where’ tasks were affected by developmental processes by testing across different age groups. Using the same set of stimuli, participants conducted either an object recognition task or an object location task. For the recognition task, participants were required to respond to two of four target objects (animals) and to withhold response to the remaining two objects. For the location task, participants responded when an object occupied either of two target locations and withheld response if the object occupied a different location. Target stimuli were presented by vision alone, by audition alone, or bimodally. In both tasks, cross-modal cues were either congruent or incongruent. The results revealed that response times in both the object recognition task and the object location task benefited from the presence of a congruent cross-modal cue, relative to incongruent or unisensory conditions. In the younger adult group, the effect was strongest for response times, although the same pattern was found for accuracy in the object location task but not in the recognition task. Following recent studies on multisensory integration in children (e.g., Brandwein, 2010; Gori, 2008), we then tested performance in children (i.e., 8–14-year-olds) using the same task. Although overall performance was affected by age, our findings suggest interesting parallels between children and adults in the benefit of congruent cross-modal cues, for both object recognition and location tasks.
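As an illustration of how the congruent-cue benefit and congruency cost described above could be quantified from trial-level response times, here is a minimal sketch. The data frame columns, condition labels, and values are hypothetical, not the study's data or analysis code.

```python
import pandas as pd

# Hypothetical trial-level data: one row per correct go-trial, with the cue
# condition and the response time in milliseconds.
trials = pd.DataFrame({
    "condition": ["congruent", "congruent", "incongruent", "incongruent",
                  "visual_only", "visual_only", "auditory_only", "auditory_only"],
    "rt_ms": [412, 398, 471, 455, 440, 452, 505, 490],
})

mean_rt = trials.groupby("condition")["rt_ms"].mean()

# Cross-modal benefit: faster responses with a congruent cue than in the
# best unisensory condition (here, vision alone).
benefit = mean_rt["visual_only"] - mean_rt["congruent"]

# Congruency cost: slower responses when the cue conflicts with the target.
cost = mean_rt["incongruent"] - mean_rt["congruent"]

print(f"congruent-cue benefit: {benefit:.1f} ms, incongruency cost: {cost:.1f} ms")
```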