To advance our understanding of the emotional and cognitive deficits in behavioral variant frontotemporal dementia (bvFTD), the current study examined comprehension and expression of emotions from prosodic and facial cues in a 66-year-old woman diagnosed with bvFTD, who was compared to six patients with acute right hemisphere stroke. Recognition of emotion from prosodic cues was assessed using an identification task in four conditions with decreasing verbal demands (neutral sentences, language-like pseudo-sentences, monosyllables, and asyllabic vowel sounds). Repetition of utterances with emotional connotations and self-generated conversations were analyzed to measure relative changes in mean fundamental frequency (f0), f0 variance, speech rate, and intensity, along with patterns of facial musculature. The patient showed a marked deficit in identifying emotions in all four prosody conditions, and she showed little modulation of mean f0, f0 variance, speech rate, and intensity across emotion categories relative to neutral utterances. In addition, she displayed little to no facial expression during emotion-provoking tasks, yet had no difficulty recognizing emotions from facial expressions or verbal scenarios. These results indicate a selective impairment in recognizing emotions from prosody and in expressing emotions through both prosodic and facial features. Impaired processing of emotional prosody and facial expressions could be important for detecting bvFTD with greater right hemisphere atrophy.
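The acoustic side of this analysis (mean f0, f0 variance, speech rate, and intensity relative to neutral utterances) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example using the Parselmouth bindings to Praat; the file names, the syllable count used as a speech-rate proxy, and the `acoustic_profile` helper are illustrative assumptions, not the tooling or measures reported in the study.

```python
# Hypothetical sketch: extracting the acoustic measures named in the abstract
# (mean f0, f0 variance, speech rate, intensity) with Parselmouth (Praat bindings).
# File names and the syllable count used for speech rate are illustrative only.
import numpy as np
import parselmouth


def acoustic_profile(wav_path: str, n_syllables: int) -> dict:
    """Return rough per-utterance acoustic measures for one recording."""
    snd = parselmouth.Sound(wav_path)

    # Fundamental frequency (f0): Praat's pitch track; unvoiced frames come back as 0.
    pitch = snd.to_pitch()
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]  # keep voiced frames only

    # Intensity contour in dB.
    intensity = snd.to_intensity()

    duration_s = snd.xmax - snd.xmin  # total duration of the recording
    return {
        "mean_f0_hz": float(np.mean(f0)),
        "f0_variance": float(np.var(f0)),
        "speech_rate_syll_per_s": n_syllables / duration_s,
        "mean_intensity_db": float(np.mean(intensity.values)),
    }


# Compare an emotional utterance against a neutral baseline (relative change),
# mirroring the abstract's "relative changes ... compared to neutral utterances".
neutral = acoustic_profile("neutral_01.wav", n_syllables=8)  # hypothetical files
angry = acoustic_profile("angry_01.wav", n_syllables=8)
relative_change = {k: angry[k] - neutral[k] for k in neutral}
print(relative_change)
```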
Based on the hypothesis that emotion expression is in large part biologically determined ("universal"), this study examined whether spoken utterances conveying seven emotions (anger, disgust, fear, sadness, happiness, surprise, and neutral) demonstrate similar acoustic patterns in four distinct languages (English, German, Hindi, and Arabic). Emotional pseudo-utterances ("The dirms are in the cindabal") were recorded by four native speakers of each language using an elicitation paradigm. Across languages, approximately 2500 utterances that were perceptually identified as communicating the intended target emotion were analyzed for three acoustic parameters: f0Mean, f0Range, and speaking rate. Combined variance in the three acoustic measures contributed significantly to differences among the seven emotions in each language, although f0Mean played the largest role for each language. Disgust, sadness, and neutral were always produced with a low f0Mean, whereas surprise (and usually fear and anger) exhibited an elevated f0Mean. Surprise displayed an extremely wide f0Range, and disgust exhibited a much slower speaking rate than the other emotions in each language. Overall, the acoustic measures demonstrated many similarities among languages, consistent with the notion of universal patterns of vocal emotion expression, although certain emotions were poorly predicted by the three acoustic measures and probably rely on additional acoustic parameters for perceptual recognition.
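The claim that combined variance in f0Mean, f0Range, and speaking rate separates the seven emotions is the kind of question typically addressed with a discriminant-style analysis. The sketch below is a hypothetical illustration using scikit-learn's LinearDiscriminantAnalysis on simulated placeholder data; the feature values, sample sizes, and cross-validation setup are assumptions for illustration, not the study's actual dataset or statistical procedure.

```python
# Hypothetical sketch: testing how well three acoustic measures (f0Mean, f0Range,
# speaking rate) discriminate among seven emotion categories, in the spirit of a
# discriminant analysis. The data here are simulated placeholders, not study data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
emotions = ["anger", "disgust", "fear", "sadness", "happiness", "surprise", "neutral"]

# Simulated utterances: rows = utterances, columns = [f0Mean, f0Range, speaking_rate].
n_per_emotion = 50
X = np.vstack([
    rng.normal(loc=[200 + 10 * i, 80 + 5 * i, 4.0 + 0.1 * i],
               scale=[20, 15, 0.5],
               size=(n_per_emotion, 3))
    for i in range(len(emotions))
])
y = np.repeat(emotions, n_per_emotion)

# Linear discriminant analysis: how separable are the emotion categories
# given only these three acoustic cues?
lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print(f"Mean cross-validated classification accuracy: {scores.mean():.2f}")
```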