Reward value guides goal-directed behavior and modulates early sensory processing. Rewarding stimuli are often multisensory, but it is not known how reward value is combined across sensory modalities. Here we show that the integration of reward value critically depends on whether the distinct sensory inputs are perceived to emanate from the same multisensory object. We systematically manipulated the congruency in monetary reward values and the relative spatial positions of co-occurring auditory and visual stimuli that served as bimodal distractors during an oculomotor task performed by healthy human participants (male and female). The amount of interference induced by the distractors was used as an indicator of their perceptual salience. Our results across two experiments show that when reward value is linked to each modality separately, the value congruence between vision and audition determines the combined salience of the bimodal distractors. However, the reward value of vision wins over the value of audition if the two modalities are perceived to convey conflicting information regarding the spatial position of the bimodal distractors. These results show that in a task that relies heavily on the processing of visual spatial information, the reward values from multiple sensory modalities are integrated with each other, each with its respective weight. This weighting depends on the strength of prior beliefs regarding a common source for the incoming unisensory signals, based on their congruency in reward value and perceived spatial alignment.

Significance Statement

Real-world objects are typically multisensory, but it is not known how reward value is combined across sensory modalities. We examined how eye movements toward a visual target are modulated by the reward value of audiovisual distractors. Our results show that in the face of uncertainty as to whether co-occurring visual and auditory inputs belong to the same object, congruence in their reward values is used to guide audiovisual integration. However, when a strong prior exists to assume that the unisensory inputs do not emanate from the same object, the associative value of vision dominates over that of audition. These results demonstrate that the brain uses a reward-sensitive, flexible weighting mechanism to decide whether incoming sensory signals should be combined.
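One way to make the weighting idea concrete (an illustrative sketch of our own, not the computational model reported in the study) is to let $v_V$ and $v_A$ denote the learned reward values of the visual and auditory components and $p_c$ the prior belief that both arise from a common source. The combined salience $S$ of the bimodal distractor could then be written as

\[
S \;=\; p_c\,\bigl(w_V v_V + w_A v_A\bigr) \;+\; \bigl(1 - p_c\bigr)\, v_V, \qquad w_V + w_A = 1,
\]

where spatial alignment and congruent reward values increase $p_c$, so that both modalities contribute to $S$, whereas perceived spatial misalignment drives $p_c$ toward zero and leaves the visual value dominant, consistent with the pattern of results summarized above. The symbols $p_c$, $w_V$, and $w_A$ and the linear form of the combination are assumptions made purely for illustration.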