In this study, we investigate brain networks during positive and negative emotions for different stimulus types (audio only, video only, and audio + video) in the [Formula: see text] and [Formula: see text] bands in terms of the phase locking value (PLV), a nonlinear measure of functional connectivity. Results show notable hemispheric lateralization: phase synchronization values between channels are significant and high in the right hemisphere for all emotions. Left frontal electrodes are also found to play a prominent role in emotion-related functional connectivity. In addition, significant inter-hemispheric phase locking values are observed between the left and right frontal regions, specifically between the left anterior frontal region and the right mid-frontal, inferior-frontal, and anterior frontal regions, and also between the left and right mid-frontal regions. An ANOVA across stimulus types shows that the stimulus types are not separable for high-valence emotions. PLV values differ significantly only for negative or neutral emotions, and only between audio only/video only and audio only/audio + video stimuli. The absence of a significant difference between video only and audio + video stimuli is interesting and might be interpreted as evidence that video content is the most effective part of a stimulus.
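As a point of reference for the connectivity measure used above, the following is a minimal sketch of a phase locking value computation between two channels, assuming NumPy and SciPy; the band-pass range, filter order, and function names are illustrative assumptions and are not taken from the study.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(8.0, 13.0)):
    """Phase locking value between two equal-length 1-D signals.

    x, y : raw channel time series sampled at fs Hz
    band : frequency band of interest in Hz (example range only)
    """
    # Band-pass filter both channels in the band of interest
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)
    # Instantaneous phases from the analytic (Hilbert) signal
    px, py = np.angle(hilbert(xf)), np.angle(hilbert(yf))
    # PLV = magnitude of the mean phase-difference vector
    # (0 = no phase locking, 1 = perfect phase locking)
    return np.abs(np.mean(np.exp(1j * (px - py))))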
Differences in speech articulation among four emotion types (neutral, anger, sadness, and happiness) are investigated by analyzing tongue tip, jaw, and lip movement data collected from one male and one female speaker of American English. The data were collected using an electromagnetic articulography (EMA) system while the subjects produced simulated emotional speech. Pitch, root-mean-square (rms) energy, and the first three formants were estimated for vowel segments. For both speakers, angry speech exhibited the largest rms energy and the largest articulatory activity in terms of displacement range and movement speed. Happy speech was characterized by the largest pitch variability; it had higher rms energy than neutral speech, but its articulatory activity was comparable to, or less than, that of neutral speech. That is, happy speech was more prominent in voicing activity than in articulation. Sad speech exhibited the longest sentence duration and lower rms energy, yet its articulatory activity was no less than that of neutral speech. Interestingly, for the male speaker, articulation of vowels in sad speech was consistently more peripheral (i.e., showed more forward displacements) than for the other emotions; however, this did not hold for the female subject. These and other results will be discussed in detail together with the associated acoustics and perceived emotional qualities. [Work supported by NIH.]
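For illustration of one of the acoustic measures mentioned above, here is a minimal sketch of frame-based rms energy estimation for a vowel segment, assuming NumPy; the frame and hop lengths are illustrative assumptions, not the settings used in the study, and pitch and formant estimation would require additional methods (e.g., autocorrelation and LPC).

import numpy as np

def rms_energy(signal, fs, frame_ms=25.0, hop_ms=10.0):
    """Return per-frame rms energy of a mono signal sampled at fs Hz."""
    frame = int(fs * frame_ms / 1000)   # samples per analysis frame
    hop = int(fs * hop_ms / 1000)       # samples between frame starts
    frames = [signal[i:i + frame]
              for i in range(0, len(signal) - frame + 1, hop)]
    # rms of each frame: sqrt of the mean squared amplitude
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])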