2011
DOI: 10.1037/a0024329
Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.

Abstract: We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., …

Cited by 87 publications (156 citation statements: 25 supporting, 129 mentioning, 2 contrasting). References 72 publications (153 reference statements).
“…Although format effects were not evident in the numerical ratings of the distractor items, we did observe format differences in the RTs. These format differences are consistent with past research in our laboratory, with the research of others who found environmental sounds to be more difficult to identify than other stimulus formats (Saygin, Dick, & Bates, 2005), and with the research of Chen and Spence (2011). They suggested that 200-350 ms are needed for an auditory stimulus to access its semantic representations, and this time is relatively longer than the time needed when pictures are used as the stimuli (Potter, 1975; Thorpe, Fize, & Marlot, 1996).…”
Section: Discussion (supporting, confidence: 79%)
“…Those studies, however, used a detection task with simple audiovisual stimuli (such as the presence of a tone or light). With more complex identification tasks, like the concept identification task that we used, visual targets are processed more quickly than sounds, because visual information is available all at once whereas auditory stimuli unfold over time, and when participants are asked to process the content of the information, this takes longer with auditory stimuli, as demonstrated by the findings from Experiments 1 and 3, and as had been found by other researchers who compared environmental sounds and pictures (Chen & Spence, 2011; Saygin et al., 2005).…”
Section: Discussion (supporting, confidence: 66%)
“…In particular, Irwin (1996, 2000; see also Henderson, 1994) showed that repetition of a concept from prime to probe (e.g., the word FISH is presented in the prime, whereas the picture of a fish is shown in the probe) led to object-specific facilitation effects; in turn, it was argued that the representations of stimuli in object files consist not only of perceptual features, but also of identity or conceptual features. In the same vein, research on cross-modal congruency showed that hearing the irrelevant sound of a dog facilitates identifying the picture of a dog (Chen & Spence, 2010; see also Chen & Spence, 2011; Laurienti, Kraft, Maldjian, Burdette, & Wallace, 2004). This finding suggests that irrelevant stimuli presented in a different modality than the target are processed up to a conceptual level and can then facilitate responding to the target.…”
(mentioning, confidence: 85%)
“…However, as argued recently by Otto & Mamassian [99], the results of such studies do not necessarily show that sensory evidence is integrated multisensorially prior to the participant making a perceptual decision, but rather may have been accumulated separately for each signal (see also [100]). In addition, Chen & Spence's [101] study demonstrating that simultaneously presented semantically congruent pictures and sounds can modulate a participant's response criterion without necessarily impacting on their perceptual sensitivity (as indexed by d′) in a picture-detection task provides another good example here. One thing to investigate is whether individuals would then be happy to report having been aware of multisensory objects or events.…”
Section: (B) Subjective Reports Do Not Establish the Occurrence of Mu… (mentioning, confidence: 99%)
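
The criterion-versus-sensitivity distinction invoked in this last excerpt comes from signal detection theory: d′ indexes how well signal trials are discriminated from noise trials, while the criterion c indexes response bias. A minimal sketch of the standard equal-variance Gaussian computation follows; the function name and the example hit/false-alarm rates are illustrative assumptions, not values from Chen and Spence (2011).

from statistics import NormalDist

def sdt_indices(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Equal-variance Gaussian SDT: d' = z(H) - z(F), c = -(z(H) + z(F)) / 2."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF (z-transform)
    zh, zf = z(hit_rate), z(fa_rate)
    d_prime = zh - zf            # sensitivity: separation of signal and noise distributions
    criterion = -(zh + zf) / 2   # bias: negative = liberal, positive = conservative
    return d_prime, criterion

# Illustrative rates only: a congruent sound that merely loosens the
# observer's criterion shifts c while leaving d' essentially unchanged.
print(sdt_indices(0.80, 0.20))  # d' ~ 1.68, c = 0.00
print(sdt_indices(0.90, 0.35))  # d' ~ 1.67, c ~ -0.45 (more liberal)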