2017
DOI: 10.14445/22312803/ijctt-v52p101
Detection and Analysis of Human Emotions through Voice and Speech Pattern Processing

Cited by 34 publications (15 citation statements)
References 7 publications
“…Other research has shown that social media (Greyling et al 2019) and online search behavior (Brodeur et al 2020; Ford et al 2018) can be used to monitor specific emotional states. Lab research has shown that emotions can be inferred from observation-based obtrusive measures, such as speech characteristics (Dasgupta 2017; B. L. Smith et al 1975; Williams and Stevens 1972), combinations of acoustic variables (Banse and Scherer 1996) and voice pitch (Mauss and Robinson 2009).…”
Section: Emotions (mentioning, confidence: 99%)
“…Previous work has only attempted to qualitatively and quantitatively detect ASR errors, but has not corrected these errors automatically, as only manual correction of lexical errors has been suggested (Schuller, 2018; Tang et al, 2019). The solution proposed to these ASR problems is to build a large targeted dataset for quantifying the detected errors and automatically re-formulating them (Dasgupta, 2017; Schuller, 2018; Tang et al, 2019). The term re-formulation in the context of this study means automatically re-adjusting and re-sizing speaker-related errors, i.e., the user's irrational acoustic behavior during speech communication.…”
Section: Style Variability (mentioning, confidence: 99%)
“…It was then inferred that emotional state directly influences or alters speech signals, based on speech recognition accuracy, classification accuracy and standard deviation parameters. Dasgupta (2017) presented an algorithmic approach for the detection and quantitative analysis of human emotions using voice and speech processing, through several attributes: pitch, timbre, loudness and time between words. The approach covers three emotional states (normal, angry and panicked) and uses a small sample (two speech samples).…”
Section: Related Work (mentioning, confidence: 99%)
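The attribute-based scheme summarized in that statement (extract acoustic features such as pitch and loudness, then map them to one of three emotional states) can be sketched as a toy pipeline. The feature estimators, thresholds, and classification rules below are illustrative assumptions for a minimal sketch, not the paper's actual algorithm:

```python
import numpy as np

def estimate_pitch(signal, sr):
    """Estimate fundamental frequency (Hz) via autocorrelation."""
    sig = signal - signal.mean()
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    # Search for the autocorrelation peak within a plausible
    # voice range (~60-400 Hz), skipping the zero-lag maximum.
    lo, hi = sr // 400, sr // 60
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

def loudness_rms(signal):
    """Root-mean-square amplitude as a simple loudness proxy."""
    return float(np.sqrt(np.mean(signal ** 2)))

def classify_emotion(pitch_hz, rms, pitch_hi=250.0, rms_hi=0.5):
    """Toy rule (hypothetical thresholds): high pitch and high
    loudness -> 'panicked'; high loudness alone -> 'angry';
    otherwise 'normal'."""
    if pitch_hz > pitch_hi and rms > rms_hi:
        return "panicked"
    if rms > rms_hi:
        return "angry"
    return "normal"

# Synthetic voiced frames: a quiet 120 Hz tone vs a loud 300 Hz tone.
sr = 16000
t = np.arange(sr) / sr
calm = 0.2 * np.sin(2 * np.pi * 120 * t)
tense = 0.9 * np.sin(2 * np.pi * 300 * t)

print(classify_emotion(estimate_pitch(calm, sr), loudness_rms(calm)))   # normal
print(classify_emotion(estimate_pitch(tense, sr), loudness_rms(tense))) # panicked
```

A real system would of course also need timbre and inter-word timing features, plus a trained classifier rather than fixed thresholds; the sketch only shows the overall feature-to-label shape of such an approach.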
“…In the solitude condition of prosody, the unusual effects of prosody are difficult to reproduce, and analysing prosody is complicated by its multiplicity of functions [3]. The prosody generation module is a mandatory component of various speech processing applications; in particular, today's commercial Text-to-Speech systems use rather unsophisticated methods, typically assigning a default sentence accent based on the function-word distinction [4].…”
Section: Introduction (mentioning, confidence: 99%)