Assessing the well-being of an animal is hindered by the lack of efficient communication between humans and animals. Instead of direct communication, a variety of parameters are employed to evaluate an animal's well-being. Especially in the field of biomedical research, scientifically sound tools to assess pain, suffering, and distress in experimental animals are in high demand for ethical and legal reasons. For mice, the most commonly used laboratory animals, a valuable tool is the Mouse Grimace Scale (MGS), a coding system for facial expressions of pain in mice. We aim to develop a fully automated system for the surveillance of post-surgical and post-anesthetic effects in mice. Our work introduces a semi-automated pipeline as a first step towards this goal. We use and provide a new data set of images of freely moving, black-furred laboratory mice. Images were obtained after anesthesia (with isoflurane or a ketamine/xylazine combination) and surgery (castration). We deploy two pre-trained state-of-the-art deep convolutional neural network (CNN) architectures (ResNet50 and InceptionV3) and compare them to a third CNN architecture without pre-training. Depending on the particular treatment, we achieve an accuracy of up to 99% for recognizing the presence or absence of post-surgical and/or post-anesthetic effects on the facial expression.
Most of the world’s 1500 active volcanoes are not instrumentally monitored, so deadly eruptions can occur without any observed precursory activity. The new Sentinel missions are now providing freely available imagery with unprecedented spatial and temporal resolutions, with payloads allowing for comprehensive monitoring of volcanic hazards. Here we present the volcano monitoring platform MOUNTS (Monitoring Unrest from Space), which aims for global monitoring using multisensor satellite-based imagery (Sentinel-1 Synthetic Aperture Radar SAR, Sentinel-2 Short-Wave InfraRed SWIR, Sentinel-5P TROPOMI), ground-based seismic data (GEOFON and USGS global earthquake catalogues), and artificial intelligence (AI) to assist monitoring tasks. It provides near-real-time access to surface deformation, heat anomalies, SO2 gas emissions, and local seismicity at a number of volcanoes around the globe, supporting both scientific and operational communities in volcanic risk assessment. Results are visualized on an open-access website where both geocoded images and time series of relevant parameters are provided, allowing for a comprehensive understanding of the temporal evolution of volcanic activity and eruptive products. We further demonstrate that AI can play a key role in such monitoring frameworks: we design and train a Convolutional Neural Network (CNN) on synthetically generated interferograms to operationally detect strong deformation (e.g., related to dyke intrusions) in the real interferograms produced by MOUNTS. The utility of this interdisciplinary approach is illustrated through a number of recent eruptions (Erta Ale 2017, Fuego 2018, Kilauea 2018, Anak Krakatau 2018, Ambrym 2018, and Piton de la Fournaise 2018–2019).
We show how exploiting multiple sensors allows for assessment of a variety of volcanic processes in various climatic settings, ranging from subsurface magma intrusion to surface emplacement of eruptive deposits, pre-/syn-eruptive morphological changes, and gas propagation into the atmosphere. The data processed by MOUNTS provide insights into the eruptive precursors and eruptive dynamics of these volcanoes, and are sharpening our understanding of how the integration of multiparametric datasets can help better monitor volcanic hazards.
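The abstract above mentions training the CNN on synthetically generated interferograms. As an illustrative sketch (not the MOUNTS code), a wrapped Sentinel-1-like interferogram for a simple point pressure (Mogi) source can be synthesized with NumPy; all parameter values below are assumptions chosen only for plausibility (C-band wavelength ~5.6 cm, a ~39° incidence angle, a source at 3 km depth).

```python
import numpy as np

def synthetic_interferogram(n=256, pixel=100.0, depth=3000.0, dvol=1e6,
                            wavelength=0.0556, incidence=np.deg2rad(39.0)):
    """Wrapped interferogram (radians) for a Mogi point source at the grid
    center. Distances in metres; wavelength defaults to Sentinel-1 C-band.
    """
    x = (np.arange(n) - n / 2) * pixel
    xx, yy = np.meshgrid(x, x)
    r2 = xx**2 + yy**2
    c = (1.0 - 0.25) * dvol / np.pi                 # Poisson ratio nu = 0.25
    uz = c * depth / (r2 + depth**2) ** 1.5          # vertical displacement
    ur = c * np.sqrt(r2) / (r2 + depth**2) ** 1.5    # radial displacement
    # Crude line-of-sight projection: vertical minus the east component
    # of the radial motion (a descending, right-looking geometry is assumed).
    ue = np.divide(ur * xx, np.sqrt(r2), out=np.zeros_like(ur), where=r2 > 0)
    los = uz * np.cos(incidence) - ue * np.sin(incidence)
    phase = 4.0 * np.pi * los / wavelength           # two-way path delay
    return np.angle(np.exp(1j * phase))              # wrap into (-pi, pi]

igram = synthetic_interferogram()  # one training sample; label = "deformation"
```

Varying source depth, volume change, and noise over many such samples, and pairing them with deformation-free examples, yields a labelled training set of the kind the abstract describes for the detection CNN.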
This paper presents a comprehensive review of the principles and applications of deep learning in retinal image analysis. Many eye diseases lead to blindness in the absence of proper clinical diagnosis and medical treatment. For example, diabetic retinopathy (DR) is one such disease, in which the retinal blood vessels of human eyes are damaged. Ophthalmologists diagnose DR based on their professional knowledge, a process that is labor-intensive. With the advances in image processing and artificial intelligence, computer vision-based techniques have been applied rapidly and widely in the field of medical image analysis and are becoming a better way to advance ophthalmology in practice. Such approaches utilize accurate visual analysis to identify abnormalities of blood vessels with improved performance over manual procedures. More recently, machine learning, and in particular deep learning, has been successfully implemented in this area. In this paper, we focus on recent advances in deep learning methods for retinal image analysis. We review the related publications since 1982, which include more than 80 papers on retinal vessel detection, in a research scope spanning from segmentation to classification. Although deep learning has been successfully implemented in other areas, we found only 17 papers to date that focus on retinal blood vessel segmentation. This paper characterizes each deep-learning-based segmentation method described in the literature, analyzing the limitations and advantages of each. In the end, we offer some recommendations for the future improvement of retinal image analysis. INDEX TERMS: Retinal colour fundus images, convolutional neural networks, retinal vessel segmentation.
ABSTRACT: The extraction and description of keypoints as salient image parts has a long tradition in the processing and analysis of 2D images. Nowadays, 3D data is gaining ever more importance. This paper discusses the benefits and limitations of keypoints for the task of fusing multiple 3D point clouds. To this end, several combinations of 3D keypoint detectors and descriptors are tested. The experiments are based on 3D scenes with varying properties, including 3D scanner data as well as Kinect point clouds. The obtained results indicate that the specific method used to extract and describe keypoints in 3D data has to be chosen carefully. In many cases, accuracy suffers from an overly aggressive reduction of the available points to keypoints.