Assessing the well-being of an animal is hindered by the lack of efficient communication between humans and animals. Instead of direct communication, a variety of behavioral, biochemical, physiological, and physical parameters are employed to evaluate the well-being of an animal. Especially in the field of biomedical research, scientifically sound tools to assess pain, suffering, and distress in experimental animals are in high demand for ethical and legal reasons. For mice, the most commonly used laboratory animals, a valuable tool is the Mouse Grimace Scale (MGS), a coding system for facial expressions of pain in mice that has been shown to be accurate and reliable. Currently, MGS scoring is very time- and effort-consuming, as it is performed manually by humans who have been thoroughly trained in the method. We therefore aim to develop a fully automated system for the surveillance of well-being in mice. Our work introduces a semi-automated pipeline as a first step towards this goal. We use and provide a new data set of images of black-furred laboratory mice that were moving freely, so the images contain natural variation in perspective and background. Analysis of this data set is therefore more challenging, but it reflects the realistic conditions that would be encountered without human intervention. Images were obtained after anesthesia (with isoflurane or a ketamine/xylazine combination) and surgery (castration). We deploy two pre-trained state-of-the-art deep convolutional neural network (CNN) architectures (ResNet50 and InceptionV3) and compare them to a third CNN architecture without pre-training. Depending on the particular treatment, we achieve an accuracy of up to 99% for binary "pain"/"no-pain" classification.

Author summary

In the field of animal research, it is crucial to assess the well-being of an animal. For mice, the most commonly used laboratory animals, there is a variety of indicators of well-being. In particular, the facial expression of a mouse can give important information on its state of well-being. Currently, however, the surveillance of well-being can only be ensured while a human is present. We therefore developed a first approach towards fully automated surveillance of the well-being status of a mouse. We trained neural networks on face images of black-furred mice, which were either untreated or had undergone anesthesia or surgery, to distinguish between an impaired and an unimpaired state of well-being. Our systems successfully learned to assess whether the well-being of a mouse was impaired and, depending on the particular treatment, their decisions were correct in up to 99% of cases. A tool that visualizes the features used in the decision-making process indicated that the decisions were based mainly on the facial expressions of the mouse.
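To make the transfer-learning setup named in the abstract more concrete, the following is a minimal sketch of fine-tuning a pre-trained ResNet50 (one of the two architectures mentioned) for binary "pain"/"no-pain" classification, assuming a TensorFlow/Keras environment. The classification head, input size, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of transfer learning with a pre-trained ResNet50 for a
# binary classification task; head architecture and hyperparameters are
# illustrative assumptions, not the paper's reported setup.
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

# Load ResNet50 with ImageNet weights, dropping the original classification head.
base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pre-trained feature extractor

# Attach a small binary head for the "pain"/"no-pain" decision.
model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

A common refinement of this setup is to unfreeze the top layers of the base network after the new head has converged and continue training at a lower learning rate.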
Face images are subject to many different factors of variation, especially in unconstrained in-the-wild scenarios. For most tasks involving such images, e.g. expression recognition from video streams, obtaining enough labeled data is prohibitively expensive. One common strategy to tackle this problem is to learn disentangled representations for the different factors of variation of the observed data using adversarial learning. In this paper, we use a formulation of the adversarial loss to learn disentangled representations for face images. The model facilitates learning from single-task datasets and improves the state of the art in expression recognition, achieving an accuracy of 60.53% on the AffectNet dataset without using any additional data.
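The abstract does not specify the exact adversarial formulation, so the sketch below illustrates one standard way such disentanglement can be imposed: splitting the latent code and training a gradient-reversal adversary to recover the expression label from the non-expression part. It assumes PyTorch; all dimensions and module names (encoder, expr_head, adv_head) are hypothetical, not the paper's architecture.

```python
# A minimal sketch of adversarial disentanglement via gradient reversal;
# the latent split and all sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):
        return -grad

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())
expr_head = nn.Linear(128, 8)  # predicts expression from the expression half
adv_head = nn.Linear(128, 8)   # adversary reads expression from the other half

x = torch.randn(16, 1, 64, 64)      # dummy batch of face crops
y = torch.randint(0, 8, (16,))      # dummy expression labels
z = encoder(x)
z_expr, z_rest = z[:, :128], z[:, 128:]

ce = nn.CrossEntropyLoss()
# The adversary learns to read expression from z_rest, while the reversed
# gradient pushes the encoder to strip expression information out of z_rest.
loss = ce(expr_head(z_expr), y) + ce(adv_head(GradReverse.apply(z_rest)), y)
loss.backward()
```

At convergence, z_rest should carry the nuisance factors (pose, identity, lighting) while z_expr carries the expression, which is the general goal the abstract describes.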
Volcanic sulfur dioxide (SO2) satellite observations are key for monitoring volcanic activity and for mitigating the associated risks to both human health and aviation safety. Automatic analysis of this data source, including robust retrieval of source emissions, is in turn essential for near-real-time monitoring applications. We have developed fast and accurate SO2 plume classification and segmentation algorithms using classic clustering, segmentation, and image-processing techniques. These algorithms, applied to measurements from the TROPOMI instrument onboard the Sentinel-5 Precursor platform, support accurate source estimation for volcanic SO2 plumes originating from various volcanoes. In this paper, we demonstrate the ability of different pixel classification methodologies to retrieve SO2 source emissions with good accuracy. We compare the algorithms, their strengths and shortcomings, and present plume classification results for various active volcanoes throughout the year 2021, including examples from Etna (Italy), Sangay and Reventador (Ecuador), Sabancaya and Ubinas (Peru), Sheveluch and Klyuchevskoy (Russia), as well as Ibu and Dukono (Indonesia). The developed algorithms, shared as open-source code, contribute to improved analysis and monitoring of volcanic emissions from space.
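Since the paper describes classic clustering and image-processing techniques for plume pixel classification, the following is a minimal sketch of that general approach on synthetic data, assuming NumPy, scikit-learn, and SciPy. The two-cluster KMeans step and the connected-component filtering are illustrative assumptions, not the authors' published algorithm.

```python
# A minimal sketch of cluster-based plume pixel classification on a 2-D field
# of SO2 vertical column densities; data, cluster count, and post-processing
# are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from scipy import ndimage

rng = np.random.default_rng(0)
so2 = rng.normal(0.1, 0.05, size=(200, 200))  # background noise (DU)
so2[80:120, 50:150] += 2.0                    # a synthetic plume enhancement

# Cluster pixel values into "background" vs "plume" classes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0) \
    .fit_predict(so2.reshape(-1, 1)).reshape(so2.shape)
plume_class = np.argmax([so2[labels == k].mean() for k in range(2)])
mask = labels == plume_class

# Group plume pixels into connected components and keep the largest one,
# discarding isolated false-positive pixels.
components, n = ndimage.label(mask)
sizes = ndimage.sum(mask, components, index=np.arange(1, n + 1))
main_plume = components == (np.argmax(sizes) + 1)
print("plume pixels:", int(main_plume.sum()))
```

A segmented plume mask of this kind can then feed a source-estimation step, e.g. by relating the masked column densities to wind fields near the detected plume.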