Recognition of pain in equines (such as horses and donkeys) is essential for their welfare. Because these animals cannot communicate verbally, assessment depends entirely on the observer's ability to spot visible signs of pain. Grimace scales have proven efficient at detecting pain, but scoring is time-consuming and depends on the annotators' level of training, so validity is hard to ensure. Automating this process would support both assessment and annotator training. This work presents a system for pain prediction in horses based on grimace scales. The pipeline automatically locates landmarks on horse faces before classification. Our experiments show that separate classifiers are needed for different head poses, and that fusing different features improves results. We further investigate transferring horse-based models to donkeys and illustrate the resulting loss of accuracy in automatic landmark detection and subsequent pain prediction.
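The pose-specific classification described above could be sketched as follows. This is a minimal illustration, not the authors' implementation: the pose buckets, yaw thresholds, and linear stand-in classifiers are all hypothetical, and in practice each pose-specific model would be a classifier trained on features from that pose.

```python
import numpy as np

class LinearClassifier:
    """Minimal stand-in for a trained binary pain classifier (e.g. an SVM)."""
    def __init__(self, w, b):
        self.w, self.b = np.asarray(w, dtype=float), float(b)

    def predict(self, x):
        # 1 = pain, 0 = no pain
        return int(np.dot(self.w, x) + self.b > 0)

def assign_pose(yaw_deg):
    """Bucket a head-yaw estimate into a discrete pose class (illustrative threshold)."""
    return "frontal" if abs(yaw_deg) < 30 else "profile"

def predict_pain(classifiers, features, yaw_deg):
    """Route the sample to the classifier trained for its head pose."""
    return classifiers[assign_pose(yaw_deg)].predict(features)

# Toy usage with two pose-specific classifiers (weights are arbitrary)
clfs = {
    "frontal": LinearClassifier([1.0, -0.5], b=0.0),
    "profile": LinearClassifier([0.2, 0.8], b=-0.1),
}
print(predict_pain(clfs, np.array([0.6, 0.1]), yaw_deg=10))  # -> 1 (frontal model)
```

The dispatch-by-pose structure is the key point: rather than one classifier handling all viewpoints, each pose range gets a model trained on data from that viewpoint.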
Advances in animal motion tracking and pose recognition have been a game changer in the study of animal behavior. Recently, an increasing number of works go 'deeper' than tracking and address automated recognition of animals' internal states, such as emotions and pain, with the aim of improving animal welfare, making this a timely moment to systematize the field. This paper provides a comprehensive survey of computer vision-based research on recognition of affective states and pain in animals, covering both facial and bodily behavior analysis. We summarize the efforts presented so far on this topic, classifying them across several dimensions, highlight challenges and research gaps, provide best-practice recommendations for advancing the field, and outline future directions for research.
Pain in farm animals harms the economics of farming and compromises animal welfare. However, prey animals tend not to openly express signs of weakness, which makes pain assessment difficult. We propose a novel hierarchical model for disease progression evaluation, adapted to a wide range of head poses, from which the relevant information is extracted. A fine-tuned CNN is applied for face detection, followed by CNN-based pose estimation and a pose-informed landmark localization method. Multi-modal features are then extracted, combining the appearance of regions of interest, described using Histograms of Oriented Gradients, with geometric features and the pose values, and fed to a binary Support Vector Machine classifier. To evaluate the complete pipeline, videos of the same sheep recorded at initial and advanced stages of treatment were tested, showing a decrease in the average detected pain score. The pain evaluation method significantly outperforms the existing state-of-the-art approach and is the first to apply pose-based feature extraction to sheep pain detection.
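The multi-modal fusion step described above could look roughly like this. This is a minimal numpy sketch under stated assumptions, not the authors' code: the HOG extraction, the landmark detector, and the SVM training are omitted, and pairwise landmark distances stand in for whatever geometric descriptor the paper actually uses.

```python
import numpy as np

def geometric_features(landmarks):
    """Pairwise distances between facial landmarks (one simple geometric descriptor)."""
    lm = np.asarray(landmarks, dtype=float)
    diffs = lm[:, None, :] - lm[None, :, :]          # all pairwise offsets
    dists = np.linalg.norm(diffs, axis=-1)           # Euclidean distance matrix
    iu = np.triu_indices(len(lm), k=1)               # upper triangle, no diagonal
    return dists[iu]

def fuse_features(appearance, landmarks, pose_angles):
    """Concatenate appearance (e.g. HOG), geometric, and pose features into one vector."""
    return np.concatenate([
        np.asarray(appearance, dtype=float),
        geometric_features(landmarks),
        np.asarray(pose_angles, dtype=float),
    ])

# Toy example: 8 appearance values, 4 landmarks (-> 6 pairwise distances), yaw + pitch
vec = fuse_features(np.ones(8), [[0, 0], [1, 0], [0, 1], [1, 1]], [15.0, -5.0])
print(vec.shape)  # -> (16,)
```

The fused vector would then be the input to the binary SVM classifier; concatenation is the simplest fusion strategy, letting the classifier weight appearance, geometry, and pose jointly.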
Computational technologies have revolutionized the archival sciences, prompting new approaches to processing the extensive data in these collections. Automatic speech recognition and natural language processing create unique possibilities for analyzing oral history (OH) interviews, where otherwise transcribing and analyzing the full recordings would be too time-consuming. However, many oral historians note the loss of aural information when speech is converted into text, pointing out the relevance of subjective cues for a full understanding of the interviewee's narrative. In this article, we explore various computational technologies for social signal processing and their potential application space in OH archives, as well as in neighboring domains where qualitative studies are a frequently used method. We also highlight the latest developments in key technologies for multimedia archiving practices, such as natural language processing and automatic speech recognition. We discuss the analysis of both visual cues (body language and facial expressions) and non-visual cues (paralinguistics, breathing, and heart rate), noting the specific challenges introduced by the characteristics of OH collections. We argue that applying social signal processing to OH archives will have an influence beyond OH practice alone, bringing benefits to fields ranging from the humanities to the computer and archival sciences. Examining human emotions and somatic reactions in extensive interview collections would give scholars from multiple fields the opportunity to study feelings, mood, culture, and subjective experiences expressed in these interviews at a larger scale.