2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII)
DOI: 10.1109/acii.2017.8273618
Embedding stacked bottleneck vocal features in a LSTM architecture for automatic pain level classification during emergency triage

Cited by 22 publications (26 citation statements)
References 23 publications
“…Another study [5] recognized pain level via speech analysis and long short-term memory (LSTM) neural networks. First, the authors employed an unsupervised neural network to extract vocal features from a Chinese corpus.…”
Section: Speech Analysis
confidence: 99%
“…LSTM provides an elegant way to model time-series data (TSD), capturing the correlations among observations in time order [15], [16], and much research has demonstrated the superiority of LSTM on TSD. In [17], researchers embedded bottleneck vocal features in an LSTM architecture to measure pain level from voice characteristics, and the results showed higher classification accuracy than the comparison methods. Reference [18] utilized LSTM to predict coding-unit splitting, and the experimental results demonstrated its advantage in efficiency.…”
Section: Introduction
confidence: 99%
“…While much research has already indicated that facial muscle movements, i.e., action units, provide an indication of different pain levels [8,9], several recent works have started to investigate the relationship between pain intensity and vocal cues. For example, Oshrat et al. analyzed prosodic variation as a bio-signaling indicator of pain [10], Ren et al. recently proposed a database for evaluating pain from speech [11], and Tsai et al. proposed several automated machine learning methods for recognizing self-reported pain levels from speech and face multimodally in a real triage database [12,13]. These studies tend to focus on the prosodic and spectral properties of speech.…”
Section: Introduction
confidence: 99%
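The pipeline these excerpts describe — unsupervised bottleneck feature extraction followed by an LSTM classifier over pain levels — can be sketched minimally as a toy NumPy forward pass. All dimensions, parameter names, and the random weights below are illustrative assumptions, not details from the cited paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(X, W, U, b):
    """Run a single-layer LSTM over a feature sequence X of shape (T, d).

    W: (4h, d) input weights, U: (4h, h) recurrent weights, b: (4h,) biases,
    with gates stacked in the order input, forget, cell, output.
    Returns the final hidden state of shape (h,).
    """
    hdim = U.shape[1]
    h = np.zeros(hdim)
    c = np.zeros(hdim)
    for x in X:
        z = W @ x + U @ h + b
        i = sigmoid(z[:hdim])              # input gate
        f = sigmoid(z[hdim:2 * hdim])      # forget gate
        g = np.tanh(z[2 * hdim:3 * hdim])  # candidate cell state
        o = sigmoid(z[3 * hdim:])          # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

def classify_pain(features, params):
    """Map the final LSTM state to a softmax distribution over pain levels."""
    h = lstm_forward(features, params["W"], params["U"], params["b"])
    logits = params["Wy"] @ h + params["by"]
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy setup: 8-dim bottleneck features, 16 hidden units, 20 frames, 3 pain levels.
rng = np.random.default_rng(0)
d, hdim, T, n_levels = 8, 16, 20, 3
params = {
    "W": rng.normal(scale=0.1, size=(4 * hdim, d)),
    "U": rng.normal(scale=0.1, size=(4 * hdim, hdim)),
    "b": np.zeros(4 * hdim),
    "Wy": rng.normal(scale=0.1, size=(n_levels, hdim)),
    "by": np.zeros(n_levels),
}
features = rng.normal(size=(T, d))  # stand-in for stacked bottleneck vocal features
probs = classify_pain(features, params)
print(probs.shape, float(probs.sum()))
```

In the cited work the features would come from an unsupervised bottleneck network rather than a random draw, and the weights would be trained; this sketch only shows the data flow from a frame-level feature sequence to a distribution over pain-level classes.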