2013
DOI: 10.1163/22134808-00002420
The Time-Course of the Cross-Modal Semantic Modulation of Visual Picture Processing by Naturalistic Sounds and Spoken Words

Abstract: The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds). …
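To make the paradigm concrete, here is a minimal sketch (not the authors' code) of the design described in the abstract: trials cross SOA with semantic congruency, and the congruency effect is read off as a reaction-time difference. The SOA set, baseline RT, and effect sizes are illustrative assumptions; the 240 ms and 100 ms boundaries follow the findings quoted in the citation statements below.

```python
# Hypothetical simulation of a speeded picture categorization task with
# a task-irrelevant sound at varying SOAs. All numeric parameters are
# illustrative assumptions, not values from the paper.
import random
from statistics import mean

SOAS_MS = [-400, -240, -100, 0, 100]      # negative = sound leads picture (assumed set)
CONDITIONS = ["congruent", "incongruent"]

def simulate_rt(soa_ms: int, condition: str) -> float:
    """Return a simulated categorization RT (ms) for one trial."""
    rt = random.gauss(550, 40)            # hypothetical baseline RT
    if soa_ms <= -240:                    # sound leads by 240 ms or more:
        # semantic congruency effect (facilitation vs. inhibition)
        rt += -30 if condition == "congruent" else 30
    elif abs(soa_ms) <= 100:              # near-simultaneous presentation:
        rt += 25                          # unspecific inhibition, regardless of congruency
    return rt

def run_experiment(n_trials_per_cell: int = 200) -> dict:
    """Mean RT for every SOA x congruency cell."""
    return {
        (soa, cond): mean(simulate_rt(soa, cond) for _ in range(n_trials_per_cell))
        for soa in SOAS_MS
        for cond in CONDITIONS
    }

if __name__ == "__main__":
    for (soa, cond), rt in sorted(run_experiment().items()):
        print(f"SOA {soa:5d} ms, {cond:11s}: mean RT = {rt:6.1f} ms")
```

Comparing the congruent and incongruent means at each SOA reproduces the qualitative pattern the abstract describes: a congruency effect only when the sound leads by a sufficient margin.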

Cited by 27 publications (41 citation statements); references 40 publications.

Citation statements (ordered by relevance):
“…Consistent with the findings in our control group, the likelihood of auditory and visual signals being integrated into a unified percept is greatest when they are spatially and temporally aligned within 100 ms (e.g., Lewald and Guski 2003); nevertheless, multisensory stimuli can be integrated into unified objects or events even with temporal disparities of up to 800 ms (Wallace et al. 2004b). Interestingly, similar to our case with PCA but not our control participants, using a speeded picture categorization task, Chen and Spence (2013) showed multisensory semantic congruence effects only when the auditory stimulus preceded the visual signal by 240 ms or more. In contrast, with SOAs within 100 ms, an inhibitory effect was found irrespective of semantic congruence.…”
Section: Discussion (supporting)
confidence: 87%
“…Differences in findings between studies could be due to the fact that we used a simple detection task rather than an image categorization task. Indeed, the spatial and temporal properties of multisensory integration have a complex relationship that depends not only on the task at hand but also on the type of signals being integrated (e.g., novel signals vs. naturalistic images and sounds) (e.g., Chen and Spence 2013, 2017; Spence 2013; Stevenson et al. 2012). In this case with PCA, given that the primary visual and posterior parietal brain regions are affected, a degradation or lack of visual information from these brain regions may have led to a broadening of the temporal integration window for multisensory signals, compensating for the loss of visual information.…”
Section: Discussion (mentioning)
confidence: 99%
“…The Y-axis represents the spatiotemporal disparity between the stimuli. The effects of crossmodal correspondences and semantic congruency often occur between stimuli separated by larger temporal disparities (hundreds of ms) that are represented as two distinct events, as in the studies demonstrating crossmodal semantic priming (Chen and Spence, 2011b, 2013). The unity effect attributable to crossmodal correspondences or semantic congruency has, though, only been observed when the stimuli were presented within a range of 100 ms (Vatakis and Spence, 2007; Parise and Spence, 2009).…”
Section: Factors Leading To the Unity Assumption (mentioning)
confidence: 99%
“…Multisensory processes underlie many stages of information processing (Calvert; Driver & Noesselt). However, the nature of the neural mechanisms involved and their developmental trajectories are largely unknown, though it is apparent that the neural mechanisms depend on the sensory signal type (learnt or novel; Chen & Spence; Laine, Kwon, & Hamalainen; Molholm, Ritter, Javitt, & Foxe; Raij, Uutela, & Hari). Further, these mechanisms also depend on whether sensory signals are merged into a unified percept (i.e., multisensory integration) or whether specific sensory features are linked over time (e.g., transferring or matching information across the sensory systems, associative learning, etc.…”
(mentioning)
confidence: 99%