2015 IEEE International Conference on Signal Processing, Informatics, Communication and Energy Systems (SPICES)
DOI: 10.1109/SPICES.2015.7091377
Prosodic feature based speech emotion recognition at segmental and supra segmental levels

Cited by 8 publications (7 citation statements)
References 5 publications
“…Prior methods that use faces as input commonly track action units on the face such as points on the eyebrow, cheeks and lips (Fabian Benitez-Quiroz, Srinivasan, and Martinez 2016), or track eye movements (Schurgin et al 2014) and facial expressions (Majumder, Behera, and Subramanian 2014). Speech-based emotion perception methods use either spectral features or prosodic features like loudness of voice, difference in tones and changes in pitch (Jacob and Mythili 2015).…”
Section: Related Work (mentioning)
confidence: 99%
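As context for the prosodic-feature approach the citing papers attribute to this work (loudness of voice, differences in tone, changes in pitch), the sketch below illustrates how such features can be computed at the segmental (frame) and supra-segmental (utterance) level. It assumes the librosa library and a hypothetical input file speech.wav; it is an illustration of the general technique, not the implementation from Jacob and Mythili (2015).

```python
# Minimal sketch: prosodic features at segmental (frame) and
# supra-segmental (utterance) levels. Assumes librosa and a
# hypothetical file "speech.wav"; not the cited paper's code.
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=None)  # hypothetical input file

# Segmental (frame-level) contours
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)                                   # frame-wise pitch contour in Hz (NaN = unvoiced)
rms = librosa.feature.rms(y=y)[0]   # frame-wise energy, a rough loudness proxy

# Supra-segmental (utterance-level) statistics over those contours
f0_voiced = f0[~np.isnan(f0)]
features = {
    "pitch_mean": float(np.mean(f0_voiced)),
    "pitch_range": float(np.max(f0_voiced) - np.min(f0_voiced)),
    "energy_mean": float(np.mean(rms)),
    "energy_std": float(np.std(rms)),
}
print(features)  # such statistics would typically feed an emotion classifier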
“…Prior methods that use faces as input commonly track action units on the face such as points on the eyebrow, cheeks and lips [16], or track eye movements [39] and facial expressions [33]. Speech-based emotion perception methods use either spectral features or prosodic features like loudness of voice, difference in tones and changes in pitch [24]. With the rising popularity of deep learning, there is considerable work on developing learned features for emotion detection from large-scale databases of faces [51,53] and speech signals [12].…”
Section: Related Work (mentioning)
confidence: 99%
“…Emotion learning as an area of research is integral to a variety of domains, including human-computer interaction, robotics (Liu et al 2017) and affective computing (Yates et al 2017). Existing research in emotion recognition has leveraged aspects such as facial expressions (Liu et al 2017), speech (Jacob and Mythili 2015), gestures and gaits (Bhattacharya et al 2020a) to gauge an individual's emotional state. Studies in psychology indicate that humans perceive emotions by observing affective features such as arm swing rate, posture, and frequency of movements.…”
Section: Introduction (mentioning)
confidence: 99%
“…Humans perceive each other's emotions and moods through verbal cues such as speech [47,27] and text [56,12], as well as through non-verbal cues or affective features [48], including eye-movements [50], facial expressions [18], tone of voice, postures [4], and walking styles [32]. Understanding these perceived emotions shapes people's interactions and experiences, especially when performing tasks in collaborative or competitive environments [6].…”
Section: Introduction (mentioning)
confidence: 99%