2019
DOI: 10.1101/613349
Preprint

Cross-modal integration of reward value during oculomotor planning

Abstract: Reward value guides goal-directed behavior and modulates early sensory processing. Rewarding stimuli are often multisensory but it is not known how reward value is combined across sensory modalities. Here we show that the integration of reward value critically depends on whether the distinct sensory inputs are perceived to emanate from the same multisensory object. We systematically manipulated the congruency in monetary reward values and the relative spatial positions of co-occurring auditory and visual stimu…


Cited by 3 publications (3 citation statements)

References 51 publications (59 reference statements)
“…Specifically, we found a reduction in N1 negativity after learning of the reward associations, which is different from the enhanced P1 positivity and N1 negativity that have been observed in studies of cross-modal attention (Busse et al., 2005; Störmer et al., 2009; Zimmer et al., 2010). One possible mechanism is a reward-driven enhancement of audiovisual integration, which is in line with recent findings demonstrating a role of reward in multisensory integration (Bean et al., 2021; Cheng et al., 2020). This mechanism can also explain the direction of ERP modulations, as previous studies found that an audiovisual stimulus elicits response modulations of visual ERPs mainly in the N185 window, with a reduction of the negativity of the N185 component compared to unimodal stimuli (Giard and Peronnet, 1999), which is similar to the pattern of modulations we observed for cross-modal rewards.…”

Section: Discussion (supporting)
confidence: 89%
“…A second possibility is that cross-modal reward cues enhance visual perception by strengthening the audiovisual integration of the auditory and visual components of an audiovisual stimulus. Although audiovisual integration largely occurs automatically (for a review see Calvert and Thesen, 2004), top-down factors such as attention (Alsius et al., 2005; Navarra et al., 2010; Talsma et al., 2007) and recently reward value (Bean et al., 2021; Cheng et al., 2020) have been shown to affect its strength. Through a more efficient integration with the visual cues, auditory reward signals could hence capture attention not only to themselves but also to the whole audiovisual object including the visual target, thereby improving performance.…”

Section: Introduction (mentioning)
confidence: 99%
“…Many studies of attentional bias that utilise auditory stimuli investigate cross-modal interfacing of visual and auditory systems and focus on how the addition of sound stimuli modulates visual processing (e.g., Anderson, 2016c; McDonald et al., 2000, 2005; Sanz et al., 2018; Störmer et al., 2009). Cheng et al. (2020) argued that the reward values of visual and auditory inputs are integrated together and that the associative value of vision dominates over the associative value of audition, highlighting a need to investigate learning-dependent auditory attentional capture in isolation.…”

Section: Introduction (mentioning)
confidence: 99%