2017
DOI: 10.1121/1.4991328
Beyond lexical meaning: The effect of emotional prosody on spoken word recognition

Abstract: This study employs an auditory-visual associative priming paradigm to test whether non-emotional words uttered in emotional prosody (e.g., pineapple spoken in angry prosody or happy prosody) facilitate recognition of semantically emotional words (e.g., mad, upset or smile, joy). The results show an affective priming effect between emotional prosody and emotional words independent of lexical carriers of the prosody. Learned acoustic patterns in speech (e.g., emotional prosody) map directly to social concepts an…

Cited by 17 publications (17 citation statements: 3 supporting, 14 mentioning, 0 contrasting), published 2018–2023. References 17 publications. The citation statements below are ordered by relevance.
“…Additionally, face showed the least interference in accuracy with the other two channels, and prosody demonstrated less interference from semantics than the other way around. As predicted in Hypothesis 1, these results confirmed the communicative advantages of nonverbal signals (Beall & Herbert, 2008; Ben-David et al., 2016; Brazo et al., 2014; Filippi et al., 2017; Kim & Sumner, 2017; Kitayama & Ishii, 2002; Lin et al., 2018; Schirmer & Kotz, 2003), and replicated the Stroop effects of paralinguistic emotional cues on linguistic ones found in our preceding work on multisensory integration, here with a larger sample size (i.e., Lin & Ding, 2019; Lin et al., 2020). While participants from Western societies (e.g., Germany, the Netherlands, and the United States) tend to demonstrate a linguistic advantage in emotional speech perception (Ishii et al., 2003; Kitayama & Ishii, 2002; Kotz & Paulmann, 2007; Pell et al., 2011), our study aligns with previous studies conducted among Asian participants (e.g., Japanese and Filipinos) by revealing a reverse, paralinguistic advantage among Mandarin-speaking Chinese (Ishii et al., 2003; Kitayama & Ishii, 2002).…”
Section: Perceptual Asymmetries of Verbal and Nonverbal Cues in Emotion Cognition (supporting)
confidence: 80%
“…While a wealth of studies converge on the congruence-induced facilitation effect in emotion processing (Barnhart et al., 2018; Lin et al., 2020; McGurk & MacDonald, 1976; Pell, 2005; Schirmer et al., 2005; Schwartz & Pell, 2012), there have been mixed findings in the literature concerning the sensory dominance of communication channels. Some researchers found a processing bias toward linguistic information (Kitayama & Ishii, 2002; Pell et al., 2011), whereas others claimed the predominance of paralinguistic signals, including visual (Lin & Ding, 2019; Spence et al., 2011) and auditory prosodic (Ben-David et al., 2016; Filippi et al., 2017; Kim & Sumner, 2017; Mehrabian & Wiener, 1967; Schirmer & Kotz, 2003) cues. These findings align with studies using nonaffective stimuli (Green & Barber, 1981), suggesting that the magnitude of Stroop effects varies across verbal and nonverbal contents.…”
Section: Stroop Effects in Emotion Perception (mentioning)
confidence: 99%
“…This also has to be considered in the interpretation of the present findings. However, several studies presenting spoken prime words used timing comparable to the present study and reported priming effects (Holcomb and Neville (1990): SOA = 1,420–1,850 ms, ISI = 1,150 ms; Voyer and Myles (2017): SOA = 800–1,000 ms, ISI = 50–250 ms; Kim and Sumner (2017): SOA not indicated, ISI = 100 ms; Bacovcin et al. (2017): SOA not indicated, ISI = 400–600 ms). Holcomb and Neville (1990) even reported stronger priming effects for auditory prime words (SOA = 1,420–1,850 ms) than for visual prime words (SOA = 1,550 ms), even though the auditory primes partly had longer SOAs.…”
Section: Discussion (mentioning)
confidence: 55%
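
A note on the timing terms in the excerpt above, under the standard definitions (assumed here, since the excerpt does not define them): stimulus onset asynchrony (SOA) runs from prime onset to target onset, while the interstimulus interval (ISI) runs from prime offset to target onset, so the two differ by the prime's duration:

SOA = prime duration + ISI

For Holcomb and Neville (1990), for example, SOA = 1,420–1,850 ms with ISI = 1,150 ms implies spoken prime durations of roughly 1,420 − 1,150 = 270 ms to 1,850 − 1,150 = 700 ms.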
“…Because natural speech is inherently a dynamic stimulus, following one specific talker among competing talkers involves using the patterns of variation in the target talker's voice to anticipate that talker's speech. Prosodic information, which involves the plausible variation in vocal pitch, intensity, and timing, may be particularly useful in that regard (e.g., Calandruccio et al., 2019; Kim & Sumner, 2017; Zekveld et al., 2014). Of the cues that comprise prosody, variation in level has received less attention as a potential cue in CPP communication situations than other factors, such as pitch and intonation.…”
Section: Introduction (mentioning)
confidence: 99%