2011
DOI: 10.1080/17470218.2010.516835

Audiovisual speech from emotionally expressive and lateralized faces

Abstract: Emotional expression and how it is lateralized across the two sides of the face may influence how we detect audiovisual speech. To investigate how these components interact we conducted experiments comparing the perception of sentences expressed with happy, sad, and neutral emotions. In addition we isolated the facial asymmetries for affective and speech processing by independently testing the two sides of a talker's face. These asymmetrical differences were exaggerated using dynamic facial chimeras in which l…

Cited by 12 publications (27 citation statements)
References 69 publications
“…The results of Gordon and Hibberts [1] are inconsistent with those of a more recent study [4] that examined a larger number of emotions (angry, disgust, fear, happy, sad, surprise and neutral). Dupuis and Pichora-Fuller [4] found that the intelligibility of happy and sad speech did not differ from neutral speech.…”
Section: Introduction (contrasting)
confidence: 62%
“…We considered two issues: (1) What acoustic correlate(s) of emotional expression best explains why speech expressing some emotions (e.g., happy) is more intelligible in noise than speech expressing other emotions (e.g., sad), even though each is mixed with noise at the same SNR.…”
Section: Introduction (mentioning)
confidence: 99%
“…The phonemic cues consist primarily of relatively fast spectral changes occurring within 50 ms speech segments, whereas the prosodic cues consist of slower spectral changes occurring over more than 200 ms speech segments (syllabic and suprasegmental range). Emotional speech confers processing advantages such as improved intelligibility in noise background as well as faster repetition time for words spoken with congruent emotional prosody (Nygaard and Queen, 2008; Gordon and Hibberts, 2011; Dupuis and Pichora-Fuller, 2014). …”
Section: Perception of Emotional Spoken Words (mentioning)
confidence: 99%
“…Mullennix et al (2002) found that latencies for two tasks, matching judgment and phoneme identification in quiet, were negatively affected by variability in emotional prosody, suggesting that listeners benefit from repeated presentation of the same emotions. Gordon and Hibberts (2011) found that younger adults repeated sentences presented in noise more accurately when speech was spoken to portray happiness compared to sadness or neutral emotion.…”
Section: Introduction (mentioning)
confidence: 95%