Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference (PRASA-RobMech), 2016
DOI: 10.1109/robomech.2016.7813150
Automatic spontaneous pain recognition using supervised classification learning algorithms

Cited by 6 publications (8 citation statements). References 17 publications.
“…Based on these findings, it was observed that, under suitable conditions, automated pain recognition systems for detecting and identifying pain had the potential to be extremely efficient. This highlighted their usefulness and corresponded to earlier studies [38, 40].…”
Section: Discussion (supporting)
confidence: 89%
“…The extraction that occurred was faulty, and the HR was reduced. Lower accuracy was noted for faces that were not looking directly at the recording device, which matched the findings stated by Rupenga and Vadapalli [40]. Sikka et al. reported that motion in front of the camera while recording reduced the accuracy of the system [43].…”
Section: Discussion (supporting)
confidence: 87%
“…In this domain features can further be distinguished as (1) frame-level features vs. features that integrate information over time (time-window or video level), (2) geometric vs. appearance features, and (3) local vs. global features. A variety of frame-level features have been used for recognizing facial pain expression: (1) generic shape features (most often plain landmark coordinates) [1], [2], [5], [8], [12], [16], [17], [20]–[22], [25], [31], [33], [34], [38], [40], [42], [63], [89]; (2) generic appearance features, which include plain pixel representations ("SAPP", "CAPP", and similar) [1], [2], [8], [20], [21], [38], [43], [66], [73], Local Binary Pattern (LBP) [3], [12]–[14], [26], [30], [35], [39], [41], [42], [63], Histogram of Oriented Gradients (HOG) [4], [5], [14], [26], [60], [62], Gabor [18],…”
Section: Frame-level Facial Expression Features (mentioning)
confidence: 99%
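To make the appearance descriptors named in this quotation concrete, here is a minimal sketch (not the pipeline of any cited paper) that computes an LBP histogram and a HOG vector for a single pre-cropped grayscale face frame using scikit-image; the parameter choices (8 neighbours, radius 1, 8×8-pixel HOG cells) are illustrative assumptions rather than the settings used in the cited work.

```python
# Illustrative frame-level appearance features: uniform-LBP histogram and HOG.
# Assumes a pre-cropped, registered grayscale face image (here a random stand-in).
import numpy as np
from skimage.feature import local_binary_pattern, hog

def lbp_histogram(face: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Uniform LBP codes pooled into a normalized histogram (a global texture descriptor)."""
    codes = local_binary_pattern(face, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one bin for non-uniform codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def hog_descriptor(face: np.ndarray) -> np.ndarray:
    """HOG: local gradient-orientation histograms concatenated into one vector."""
    return hog(face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Stand-in 64x64 face crop; in practice this would come from a face detector/tracker.
face = (np.random.rand(64, 64) * 255).astype(np.uint8)
frame_features = np.concatenate([lbp_histogram(face), hog_descriptor(face)])
```

Such per-frame vectors are what a supervised classifier would consume directly, or what a time-window representation would aggregate over.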
“…Summary of the learning approaches that have been developed and tested for automatic pain detection from facial expressions. Brahnam et al. [79], Monwar and Rezaei [80], Brahnam et al. [81], Lu et al. [51], Ashraf et al. [82], Lucey et al. [83], Siebers et al. [84], Nanni et al. [85], Gholami et al. [86], Monwar and Rezaei [87], Wei and Li-min [88], Lucey et al. [18], Lucey et al. [89], Werner et al. [90], Chen et al. [91], Khan et al. [92], Pedersen [93], Neshov and Manolova [94], Rathee and Ganotra [95], Aung et al. [27], Kharghanian et al. [96], Roy et al. [97], Rupenga and Vadapalli [98], Meawad et al. [99], Alphonse and Dharma [100]…”
(mentioning)
confidence: 99%
“…Summary of spatial representations extracted directly from facial images for automatic pain detection. Berthouze [102], Ghasemi et al. [74], Aung et al. [27], Rupenga and Vadapalli [98], Liu et al. [110], Lopez-Martinez et al. [118]; facial landmark distances: Romera-Paredes et al. [108], Meawad et al. [99]; facial landmark distances and angles: Niese et al. [54], Siebers et al. [84]…”
(mentioning)
confidence: 99%
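As a concrete illustration of the "facial landmark distances and angles" representations listed above, the sketch below builds pairwise-distance and angle features from 2D landmark coordinates; the 68-point layout and the specific landmark indices are assumptions chosen for illustration, not the configurations of the cited authors.

```python
# Illustrative geometric features from 2D facial landmarks: pairwise distances
# plus one example angle. Landmark indices follow the common 68-point layout
# (an assumption; any landmark scheme works the same way).
import numpy as np
from itertools import combinations

def landmark_distances(landmarks: np.ndarray) -> np.ndarray:
    """All pairwise Euclidean distances between 2D landmarks, length n*(n-1)/2."""
    pairs = combinations(range(len(landmarks)), 2)
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs])

def angle_at(landmarks: np.ndarray, a: int, b: int, c: int) -> float:
    """Angle (radians) at landmark b formed by the segments b->a and b->c."""
    v1 = landmarks[a] - landmarks[b]
    v2 = landmarks[c] - landmarks[b]
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Stand-in landmarks (68 points, x/y); in practice these come from a landmark tracker.
pts = np.random.rand(68, 2)
features = np.concatenate([landmark_distances(pts), [angle_at(pts, 36, 48, 54)]])
```

Distances and angles of this kind are typically normalized (e.g. by inter-ocular distance) before being fed to a classifier, so that head size and camera distance do not dominate the representation.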