2019 | Preprint | DOI: 10.1101/563080

Characteristic sounds facilitate object search in real-life scenes

Abstract: Real-world multisensory events provide not only temporally and spatially correlated information, but also semantic correspondences about object identity. Semantically consistent sounds can enhance visual detection, identification, and search performance, but these effects have always been demonstrated in simple and stereotyped displays that lack ecological validity. In order to address identity-based crossmodal relationships in real-world scenarios, we designed a visual search task using complex, dynamic scenes. Part…


Cited by 5 publications (11 citation statements) | References 53 publications
“…In Experiment 1 (Experiments 1B and 1C), characteristic sounds speeded up search times for the semantically corresponding visual target in a visual search task. This result is in agreement with the idea that cross-modal semantic congruence can attract spatial attention and confirms prior results (Iordanescu et al., 2008, 2010; Knoeferle et al., 2016; Kvasova et al., 2019). In Experiment 1B, distractor-consistent sounds did not slow down responses compared to neutral sounds, suggesting that audio-visual congruence benefits goal-directed processes, but not the processing of other potential objects.…”
Section: Discussion (supporting)
confidence: 91%
“…Iordanescu et al. (2008, 2010), and also Knoeferle et al. (2016) and Kvasova et al. (2019). Because the effect of crossmodal semantic congruence was found only when the sound was presented 100 ms before the visual stimuli, we decided to use an SOA of 100 ms in all the following experiments.…”
Section: Results (mentioning)
confidence: 99%
“…DK was supported by an FI scholarship from the AGAUR, Generalitat de Catalunya. This manuscript has been released as a pre-print at bioRxiv (Kvasova et al., 2019).…”
Section: Funding (mentioning)
confidence: 99%
“…In an auditory context, two speech recordings might be considered semantically related if each was spoken by the same speaker. The source-based definition has also been widely used, especially in multisensory contexts, with studies finding that sounds speed search for shared-source images (Iordanescu et al., 2008) and videos (Kvasova, Garcia-Vernet, and Soto-Faraco, 2019) and improve memory for shared-source objects (Heikkilä et al., 2015), even when task-irrelevant (Duarte, Ghetti, and Geng, 2021), and that images improve memory for shared-source sounds (Moran et al., 2013). Ostensibly, these studies and the studies described above using the semantics-as-category definition investigate the same aspect of sensory events, semantics, and depend on shared mechanisms of semantic processing.…”
Section: Introduction (mentioning)
confidence: 99%