Because symptom clusters do not differ significantly between cancer and non-cancer patients, the frequent symptoms specific to non-cancer patients should be assessed. Identification of symptom clusters may help to target therapies and focus the use of medications to improve patients' quality of life.
Background: Given the unreliable self-report of patients with dementia, pain assessment should also rely on the observation of pain behaviors, such as facial expressions. Ideal observers would be well trained and would observe the patient continuously in order to pick up any pain-indicative behavior; these are requirements beyond the realistic possibilities of pain care. Therefore, the need for video-based pain detection systems has been repeatedly voiced. Such systems would allow constant monitoring of pain behaviors and thereby timely adjustment of pain management in these fragile patients, who are often undertreated for pain.
Methods: In this road-map paper we describe an interdisciplinary approach to developing such a video-based pain detection system. Development starts with the selection of appropriate video material of people in pain and with technical methods to capture their faces. Single facial motions are then automatically extracted according to an international coding system, and computer algorithms are trained to detect the combinations and timing of those motions that are pain-indicative.
Results/conclusion: We hope to encourage colleagues to join forces and to inform end users that a solution to a pressing pain-care problem is imminent. In the near future, such systems can be foreseen monitoring immobile patients in intensive-care and postoperative settings.
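The classification stage sketched in the Methods above can be illustrated with a minimal example. The sketch below assumes that the "international coding system" refers to Facial Action Coding System (FACS) Action Units (AUs), as in the database abstract that follows, and that AU intensities have already been extracted per frame; the windowing scheme, feature set, classifier choice, and the synthetic data are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: classifying fixed-length video windows as pain-indicative
# from per-frame AU intensities. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-frame AU intensities (0-5) for a handful of
# pain-relevant AUs, e.g. AU4 (brow lowerer) or AU6/7 (orbit tightening).
n_windows, n_frames, n_aus = 200, 30, 6
au_streams = rng.uniform(0, 5, size=(n_windows, n_frames, n_aus))
labels = rng.integers(0, 2, size=n_windows)  # 1 = pain-indicative window

# Encode the combination and timing of AU activity as simple window
# statistics: mean intensity, peak intensity, and activation duration per AU.
features = np.concatenate(
    [
        au_streams.mean(axis=1),
        au_streams.max(axis=1),
        (au_streams > 1.0).mean(axis=1),  # fraction of frames with AU active
    ],
    axis=1,
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, features, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

With real data, the window statistics would be computed from detector output rather than random draws, and accuracy would be reported per patient to avoid identity leakage across folds.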
Over the last few decades, there has been an increasing call in the field of computer vision to use machine learning techniques for the detection, categorization, and indexing of facial behaviors, as well as for the recognition of emotion phenomena. Automated Facial Expression Analysis has become a highly competitive field for academic laboratories, startups, and large technology corporations. This paper introduces the new Actor Study Database to address the resulting need for reliable benchmark datasets. The focus of the database is to provide real multi-view data that is not synthesized through perspective distortion. The database contains 68 minutes of high-quality videos of facial expressions performed by 21 actors. The videos are synchronously recorded from five different angles. The actors' tasks ranged from displaying specific Action Units and their combinations at different intensities to the enactment of a variety of emotion scenarios. Over 1.5 million frames have been annotated and validated with the Facial Action Coding System by certified FACS coders. These attributes make the Actor Study Database particularly applicable in machine recognition studies as well as in psychological research into affective phenomena, whether prototypical basic emotions or subtle emotional responses. Two state-of-the-art systems were used to produce benchmark results for all five views that this new database encompasses. The database is publicly available for non-commercial research.
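A benchmark evaluation of the kind reported in this abstract typically compares a detector's frame-level AU output against the certified FACS coding. The sketch below shows one common metric, per-AU F1 score; the AU subset and the annotation arrays are synthetic stand-ins, and no assumption is made about the database's actual file format or the benchmarked systems' interfaces.

```python
# Hedged sketch: per-AU frame-level F1 against FACS ground truth.
# Synthetic data only; a real evaluation would load the database's
# annotations and a detector's predictions for each of the five views.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
action_units = ["AU1", "AU4", "AU6", "AU12"]  # illustrative subset
n_frames = 10_000

for au in action_units:
    # Ground-truth coding (1 = AU present in frame) and simulated detector
    # output that agrees with the ground truth on roughly 90% of frames.
    truth = rng.integers(0, 2, size=n_frames)
    predicted = np.where(rng.random(n_frames) < 0.9, truth, 1 - truth)
    print(f"{au}: frame-level F1 = {f1_score(truth, predicted):.3f}")
```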