Proceedings of the 14th ACM International Conference on Multimodal Interaction 2012
DOI: 10.1145/2388676.2388688
Automatic detection of pain intensity

Abstract: Previous efforts suggest that the occurrence of pain can be detected from the face. Can the intensity of pain be detected as well? The Prkachin and Solomon Pain Intensity (PSPI) metric was used to classify four levels of pain intensity (none, trace, weak, and strong) in 25 participants with previous shoulder injury (McMaster-UNBC Pain Archive). Participants were recorded while they completed a series of movements of their affected and unaffected shoulders. From the video recordings, canonical normalized appearance of …
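For context, the PSPI metric named in the abstract is computed per frame from FACS action-unit intensities using Prkachin and Solomon's formula, PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43, giving a 0-16 scale. A minimal sketch follows; the exact thresholds for the four labels (none, trace, weak, strong) are an assumption inferred from the grouping described in the citing excerpts below, not taken verbatim from the paper.

```python
def pspi(au4, au6, au7, au9, au10, au43):
    """PSPI on the 0-16 scale: AU4, AU6/7, AU9/10 are scored 0-5, AU43 (eyes
    closed) is binary 0/1. Formula from Prkachin & Solomon."""
    return au4 + max(au6, au7) + max(au9, au10) + au43

def pain_level(score):
    """Map a PSPI score to the four intensity labels used in the abstract.
    The cutoffs here are an assumption: scores above 3 are grouped as strong."""
    if score == 0:
        return "none"
    elif score == 1:
        return "trace"
    elif score == 2:
        return "weak"
    else:
        return "strong"
```

For example, a frame with moderate brow lowering (AU4 = 2), strong lid tightening (AU7 = 3), and closed eyes (AU43 = 1) would score PSPI = 6 and fall in the "strong" group.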

Cited by 149 publications (121 citation statements). References 27 publications.
“…At the same time, our results showed that external representation by sound can enhance patients' understanding of their own movements and breathing patterns (if embodied), and help with providing personalized explanations and advice, facilitating pacing and goal-setting. The supervisory support by the device could be further enhanced by using functionalities to automatically detect increased pain, or more subtle cues of fear of pain, from body cues (Aung et al., in press; Olugbade, Aung, Marquardt, De C. Williams, & Bianchi-Berthouze, 2014) and from facial expressions (Hammal & Cohn, 2012; Kaltwang, Rudovic, & Pantic, 2012; Meng & Bianchi-Berthouze, 2014; Romera-Paredes et al., 2013), and to suggest or guide recalibration. Indeed, in a recent follow-up study we carried out on sensing wearable devices, people with CP confirmed the role of technology as a support for learning supervision skills, and even for sharing the supervisory role in real-life situations where the task at hand requires much attention (Felipe, Singh, Bradley, Williams, & Bianchi-Berthouze, 2015).…”
Section: Body Awareness Self-calibration and Wearable Device Can Fac… (mentioning)
confidence: 99%
“…The results of the proposed system are compared against the two state-of-the-art pain detection systems of [7] and [8]. Following these two works, the pain index obtained in Section 3.5 is classified into three categories: no pain (if the pain index is zero), weak pain (if the pain index is either 1 or 2), and severe pain (if the pain index is greater than or equal to three).…”
Section: Results (mentioning)
confidence: 99%
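The three-way grouping described in this excerpt reduces to a simple threshold mapping. A minimal sketch, with the function name being illustrative:

```python
def pain_category(pain_index):
    """Map a pain index to the three categories used in the excerpt above:
    0 -> no pain, 1 or 2 -> weak pain, >= 3 -> severe pain."""
    if pain_index == 0:
        return "no pain"
    elif pain_index in (1, 2):
        return "weak pain"
    else:
        return "severe pain"
```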
“…In [10], three feature sets, facial landmark points (PTS), Discrete Cosine Transform coefficients (DCT), and Local Binary Patterns (LBP), are extracted from the facial images and then fed to a Relevance Vector Regression (RVR) model to estimate the pain intensity. In [7], the canonical appearance of the face, obtained with an Active Appearance Model (AAM), is passed through a set of log-normal filters to obtain a discriminative energy-based representation of the facial expression, which is then used to estimate the pain level. Inspired by this work, [8] developed another energy-based system for pain estimation that uses spatiotemporal filters.…”
Section: Related Work (mentioning)
confidence: 99%
“…The recent release of pain-intensity coded data (Lucey et al. 2011) has motivated research into automated estimation of pain intensity levels (Hammal & Cohn 2012, Kaltwang et al. 2012, Rudovic et al. 2013a). For example, Hammal & Cohn (2012) performed estimation of 4 pain intensity levels, with the levels greater than 3 on the 16-level scale being grouped together.…”
Section: Intensity Estimation of Facial Expressions (mentioning)
confidence: 99%
“…For example, (Hammal & Cohn 2012) performed estimation of 4 pain intensity levels, with the levels greater than 3 on the 16-level scale being grouped together. The authors applied Log-Normal filters to the normalized facial appearance to extract the image features, which were then used to train binary SVM classifiers for each pain intensity level, on a frame-by-frame basis.…”
Section: Intensity Estimation of Facial Expressions (mentioning)
confidence: 99%
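The per-level binary classification scheme described in this excerpt can be sketched as a one-vs-rest loop. The paper trains binary SVMs per intensity level; a plain perceptron stands in here, since the point of the sketch is the one-classifier-per-level structure applied frame by frame, not the choice of classifier. All names are illustrative.

```python
import numpy as np

def train_one_vs_rest(X, y, levels, epochs=200, lr=0.1):
    """Train one binary linear classifier per pain-intensity level on a matrix
    of per-frame features X and per-frame labels y (one-vs-rest scheme).
    The excerpt's system uses SVMs; a perceptron stands in here."""
    models = {}
    for level in levels:
        target = np.where(y == level, 1.0, -1.0)   # this level vs. the rest
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, ti in zip(X, target):
                if ti * (xi @ w + b) <= 0:          # misclassified: update
                    w += lr * ti * xi
                    b += lr * ti
        models[level] = (w, b)
    return models

def predict(models, x):
    """Assign a frame the level whose classifier gives the largest margin."""
    return max(models, key=lambda lv: x @ models[lv][0] + models[lv][1])
```

On a toy, linearly separable set of frames this picks the correct level; with real appearance features the per-level decisions would typically be calibrated before comparison.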