2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2017.286
Personalized Automatic Estimation of Self-Reported Pain Intensity from Facial Expressions

Abstract: Pain is a personal, subjective experience that is commonly evaluated through visual analog scales (VAS). While this is often convenient and useful, automatic pain detection systems can reduce pain score acquisition efforts in large-scale studies by estimating it directly from the participants' facial expressions. In this paper, we propose a novel two-stage learning approach for VAS estimation: first, our algorithm employs Recurrent Neural Networks (RNNs) to automatically estimate Prkachin and Solomon Pain Inte…

Cited by 72 publications (55 citation statements); References 47 publications.
“…The metrics used to quantify the performance of automatic pain detection from facial expressions depend on the learning task. For classification tasks, metrics such as accuracy, F1 score, and area under the Receiver Operating Characteristic (ROC) curve are used. [Flattened summary table of learning methods:]

classification:
- Rudovic et al [121]
- hidden conditional random field: Lopez-Martinez et al [118]
- regularized multi-task learning: Romera-Paredes et al [108]
- two-step SVM; step1-AU: Lucey et al [83], Lucey et al [128]; step2-pain: Bartlett et al [131]; both steps: Littlewort et al [72], Littlewort et al [56], Ghasemi et al [74]
- logistical linear regression; step2-pain: Lucey et al [83], Lucey et al [128]
- k-nearest neighbors; step1-AU: Zafar and Khan [129]
- logistic regression; step2-pain: Sikka et al [59]
- alignment-based learning; step2-pain: Schmid et al [77], Siebers et al [78]
- hidden conditional random field; step2-pain: Ghasemi et al [74]
- latent-dynamic conditional random field; step1-AU: Zhang et al [76]

regression, one-step:
- support vector regression: Florea et al [111], Lopez-Martinez et al [118]
- ordinal support vector regression: Zhao et al [114]
- relevance vector regression or its variants: Kaltwang et al [107], Kaltwang et al [113], Egede et al [116], Egede and Valstar [117]
- random forest: Kächele et al [103]
- linear regression: Neshov and Manolova [94]
- neural network (NN): Lopez-Martinez et al [118]
- Convolutional Neural Network (CNN): Wang et al [109]
- 3D CNN with kernels of varying temporal lengths: Tavakolian and Hadid [119]
- recurrent CNN: Zhou et al [112]
- LSTM recurrent neural network: Rodriguez et al [115], Lopez-Martinez et al [118]

regression, two-step:
- support vector regression; step1-AU: B…”

Section: Learning Methods (mentioning, confidence: 99%)
“…Summary of spatial representations extracted directly from facial images for automatic pain detection: … Berthouze [102], Ghasemi et al [74], Aung et al [27], Rupenga and Vadapalli [98], Liu et al [110], Lopez-Martinez et al [118]; facial landmark distances: Romera-Paredes et al [108], Meawad et al [99]; facial landmark distances and angles: Niese et al [54], Siebers et al [84] …”

(mentioning, confidence: 99%)
“…1} is a binary label and N is the number of samples in the dataset. Many classification tasks can be viewed as consisting of multiple correlated subtasks, such as pain detection in a particular patient subpopulation that responds to noxious stimuli in a characteristic way, similar to yet different from other patient subpopulations [7], [8], [14], [15]. Therefore, pooling all subtasks and treating them as a single task may not be appropriate.…”
Section: Personalized Machine Learning Model (mentioning, confidence: 99%)
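The setup quoted above (binary labels, correlated subtasks per patient subpopulation, and a warning against naively pooling them) can be illustrated with a minimal regularized multi-task logistic regression sketch. This is an assumption-laden illustration, not the cited papers' exact model: the shared-plus-deviation parameterization `w_t = w + v_t`, the function names, and all hyperparameters below are illustrative.

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow in exp for large-magnitude logits.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def fit_multitask(Xs, ys, lam=1.0, lr=0.1, iters=2000):
    """Illustrative regularized multi-task logistic regression.

    Each task t (e.g. a patient subpopulation) gets weights w + V[t]:
    a shared component w plus a task-specific deviation V[t] that the
    penalty lam * ||V[t]||^2 shrinks toward zero, coupling the tasks
    instead of either pooling them or fitting them independently.
    Xs, ys: lists of per-task arrays (X: n_t x d, y: n_t in {0, 1}).
    """
    d, T = Xs[0].shape[1], len(Xs)
    w = np.zeros(d)
    V = [np.zeros(d) for _ in range(T)]
    for _ in range(iters):
        gw = np.zeros(d)
        for t in range(T):
            p = sigmoid(Xs[t] @ (w + V[t]))
            g = Xs[t].T @ (p - ys[t]) / len(ys[t])  # mean log-loss gradient
            gw += g
            V[t] -= lr * (g + lam * V[t])  # task step, shrunk toward shared
        w -= lr * gw / T                   # shared step (average gradient)
    return w, V

def predict(X, w, v):
    return (sigmoid(X @ (w + v)) > 0.5).astype(int)
```

As a usage sketch, two synthetic "subpopulations" whose true decision boundaries differ slightly can each be classified well by their personalized weights `w + V[t]`, while `lam` controls how far each task may deviate from the shared model.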
“…The personalized method of [11] takes facial landmark points extracted from face images as input and uses a Bi-LSTM [12] to estimate the observed pain intensity (OPI) value. It builds an individual facial expressiveness score (I-FES) for each person and uses Hidden Conditional Random Fields (HCRFs) to merge the sequence-level results into a personalized visual analog scale (VAS) estimate.…”

Section: Pain Intensity Regression (mentioning, confidence: 99%)
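The two-stage personalization idea described in that excerpt can be caricatured in a few lines of numpy. This is a toy sketch under stated assumptions, not the method of [11]: a per-sequence average stands in for both the Bi-LSTM and the HCRF merging stage, and `i_fes` is an illustrative stand-in for the individual facial expressiveness score (here, the ratio of a person's self-reported VAS to the observed intensity on calibration sequences).

```python
import numpy as np

def sequence_opi(frame_scores):
    """Stage 1 stand-in: collapse per-frame pain scores (assumed to come
    from a Bi-LSTM in the cited method) into one observed pain intensity."""
    return float(np.mean(frame_scores))

def i_fes(calib_vas, calib_opi, eps=1e-6):
    """Illustrative individual facial expressiveness score: how strongly
    this person's self-reported VAS exceeds what observers see (OPI),
    estimated from calibration sequences. A stoic subject gets a score > 1."""
    return float(np.sum(calib_vas) / (np.sum(calib_opi) + eps))

def personalized_vas(frame_scores, expressiveness):
    """Stage 2 stand-in: scale the sequence-level OPI by the person's
    expressiveness score to obtain a personalized VAS estimate."""
    return sequence_opi(frame_scores) * expressiveness
```

For example, if a subject's calibration sequences show observed intensities of 2.0 and 3.0 against self-reports of 4.0 and 6.0, the score is about 2, and a new sequence averaging an OPI of 2.0 is mapped to a personalized VAS near 4.0. The point of the sketch is only the personalization step: the same observed expression maps to different VAS values for different people.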