2017
DOI: 10.3758/s13414-017-1425-3

Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

Abstract: Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect…

Cited by 10 publications (3 citation statements)
References: 48 publications
“…The notion that word processing is supported by multiple brain areas is consistent with other theories that also assume a distributed neural representation of lexical knowledge (e.g., Elman, 2009; Goldinger, 2007; Gow, 2012) and it is in line with findings showing that language processing involves the integration of multiple sources of information (e.g., Bakker, Takashima, van Hell, Janzen, & McQueen, 2014; Hauk, Johnsrude, & Pulvermüller, 2004; Pufahl & Samuel, 2014; Strori, Zaar, Cooke, & Mattys, 2018; Taft, Castles, Davis, Lazendic, & Nguyen-Hoan, 2008; van Berkum, 2008; Viebahn, Ernestus, & McQueen, 2015). Crucially, however, in some cases the integration of information from multiple sources can occur very rapidly and does not necessarily require an offline mode of processing (McGurk & MacDonald, 1976; Mitterer & Reinisch, 2017; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995).…”
Section: Discussion (supporting)
confidence: 89%
“…Prior to statistical analyses, trial-by-trial RTs which were smaller than 250 msecs and over 2 SDs from the mean RT for each condition were considered outliers and replaced by the mean (cf. Navarrete et al., 2010; Strori et al., 2018). This procedure excluded 0.005% of the data for the monolingual group and 0.006% for the bilingual group.…”
Section: Results (mentioning)
confidence: 97%
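The outlier treatment quoted in the Results excerpt above amounts to a simple condition-wise replacement rule. Below is a minimal Python sketch of that rule, not the authors' code: the column names (`condition`, `rt_ms`) are hypothetical, and the two criteria from the excerpt (below 250 ms; more than 2 SDs from the condition mean) are read here as alternative triggers for replacement by the condition mean.

```python
# Minimal sketch of the condition-wise RT outlier replacement described above.
# Column names ("condition", "rt_ms") and the disjunctive reading of the two
# criteria (< 250 ms OR > 2 SDs from the condition mean) are assumptions.
import pandas as pd

def replace_rt_outliers(df: pd.DataFrame, rt_col: str = "rt_ms",
                        cond_col: str = "condition",
                        floor_ms: float = 250.0, n_sd: float = 2.0) -> pd.DataFrame:
    """Return a copy of df with outlying RTs replaced by their condition mean."""
    out = df.copy()
    for _, grp in out.groupby(cond_col):
        mean, sd = grp[rt_col].mean(), grp[rt_col].std()
        # Flag trials below the floor or beyond n_sd standard deviations
        # of that condition's mean RT.
        is_outlier = (grp[rt_col] < floor_ms) | ((grp[rt_col] - mean).abs() > n_sd * sd)
        out.loc[grp.index[is_outlier.to_numpy()], rt_col] = mean
    return out
```

Comparing the returned frame against the original `rt_ms` column gives the proportion of replaced trials, the kind of figure reported in the excerpt (0.005% and 0.006%).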
“…In addition to replicating previously described talker-specificity effects, these findings helped demonstrate that not all sources of stimulus variability are encoded equally well in memory. Another line of inquiry has sought to investigate the encoding of non-speech sounds in memory traces of spoken words, ranging from broadband aperiodic noise to environmental sounds such as barking dogs and ringing telephones (e.g., Cooper, Brouwer, & Bradlow, 2015; Creel, Aslin, & Tanenhaus, 2012; Pufahl & Samuel, 2014; Strori, Zaar, Cooke, & Mattys, 2018). Results from this body of work indicate that while memory representations of words can certainly contain non-speech acoustic information, such information is encoded more strongly when it is integral to the speech signal.…”
Section: Introduction (mentioning)
confidence: 99%