Recognising pain in patients who are unable to express it themselves opens up several possibilities for improved diagnosis and treatment. Despite advancements already made in this field, research is still lacking on the detection of pain in live videos, especially under unfavourable conditions. To address this gap, the current study proposed a hybrid model for efficient pain recognition. The hybrid, which combined the Constrained Local Model (CLM), the Active Appearance Model (AAM), and a Patch-Based Model, was applied in conjunction with image algebra. The resulting system successfully detected pain from a live stream, even with poor lighting and a low-resolution recording device, while reducing storage requirements by 40%–55% and improving processing time by 20%–25%. The experimental system detected pain in the 22 analysed videos with accuracies ranging from 55.75% to 100.00%. To further validate the proposed technique, the hybrid model was also tested on the UNBC‐McMaster Shoulder Pain Database.
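As a rough illustration of the kind of live-stream pipeline the abstract describes, the sketch below pairs a face detector with pixel-wise frame differencing (a simple form of image algebra) to flag sudden facial changes. It is a minimal sketch assuming OpenCV: the Haar cascade detector stands in for the paper's CLM/AAM/patch-based hybrid, and `PAIN_THRESHOLD` is a hypothetical tuning parameter, not a value reported in the study.

```python
# Minimal live-stream screening loop, assuming OpenCV as a stand-in for the
# paper's hybrid CLM/AAM/patch-based model and its image-algebra step.
import cv2

PAIN_THRESHOLD = 18.0  # hypothetical mean-intensity-change threshold

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # live stream from the default camera
prev_face = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # crude compensation for poor lighting
    faces = face_detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        continue
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (128, 128))  # tolerate low resolution
    if prev_face is not None:
        # "Image algebra" here: pixel-wise absolute difference between
        # consecutive face crops, used as a coarse expression-change signal.
        diff = cv2.absdiff(face, prev_face)
        if diff.mean() > PAIN_THRESHOLD:
            print("possible pain expression detected")
    prev_face = face

cap.release()
```

In practice, the frame-differencing score would be replaced by the fitted landmark and appearance parameters of the hybrid model; the sketch only shows where such a scoring step would sit in a live capture loop.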