Deep learning (DL) has revolutionized the field of computer vision and image processing. In medical imaging, algorithmic solutions based on DL have been shown to achieve high performance on tasks that previously required medical experts. However, DL-based solutions for disease detection have been proposed without methods to quantify and control their uncertainty in a decision. In contrast, a physician knows whether she is uncertain about a case and will consult more experienced colleagues if needed. Here we evaluate dropout-based Bayesian uncertainty measures for DL in diagnosing diabetic retinopathy (DR) from fundus images and show that they capture uncertainty better than straightforward alternatives. Furthermore, we show that uncertainty-informed decision referral can improve diagnostic performance. Experiments across different networks, tasks and datasets show robust generalization. Depending on network capacity and task/dataset difficulty, we surpass the 85% sensitivity and 80% specificity recommended by the NHS when referring 0-20% of the most uncertain decisions for further inspection. We analyse causes of uncertainty by relating intuitions from 2D visualizations to the high-dimensional image space. While uncertainty is sensitive to clinically relevant cases, sensitivity to unfamiliar data samples is task dependent, but can be rendered more robust.
In recent years, deep neural networks (DNNs) [1] have revolutionized computer vision [2] and gained considerable traction in challenging scientific data analysis problems [3]. By stacking layers of linear convolutions with appropriate non-linearities [4], abstract concepts can be learnt from high-dimensional input, alleviating the challenging and time-consuming task of hand-crafting algorithms. Such DNNs are quickly entering the field of medical imaging and diagnosis [5-15], outperforming state-of-the-art methods at disease detection or allowing one to tackle problems that had previously been out of reach. Applied at scale, such systems could considerably alleviate the workload of physicians by detecting patients at risk in a prescreening examination. Surprisingly, however, DNN-based solutions for medical applications have so far been suggested without any risk management. Yet, information about the reliability of automated decisions is a key requirement for them to be integrated into diagnostic systems in the healthcare sector [16]. No matter whether data are scarce or abundant, difficult diagnostic cases are unavoidable. Therefore, DNNs should report, in addition to the decision, an associated estimate of uncertainty [17], in particular since some images may be more difficult to analyse and classify than others, both for the clinician and the model, and the transition from "healthy" to "diseased" is not always clear-cut.

Automated systems are typically evaluated by their diagnostic sensitivity, specificity or area under the receiver-operating-characteristic (ROC) curve, metrics which measure the overall performance on the test set. However, ...
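To make the dropout-based uncertainty estimation and decision referral described above concrete, the following is a minimal, hypothetical sketch in PyTorch. It assumes a binary classifier model containing dropout layers that outputs a single logit per image; the number of stochastic forward passes T, the use of the standard deviation across passes as the uncertainty measure, and the 20% referral fraction are illustrative assumptions, not the authors' exact implementation.

    import torch

    def enable_mc_dropout(model):
        # Keep the network in evaluation mode overall, but re-activate dropout layers
        # so each forward pass samples a different sub-network (Monte Carlo dropout).
        model.eval()
        for module in model.modules():
            if isinstance(module, (torch.nn.Dropout, torch.nn.Dropout2d)):
                module.train()

    @torch.no_grad()
    def predict_with_uncertainty(model, images, T=100):
        # Average T stochastic passes; the spread across passes serves as the uncertainty.
        enable_mc_dropout(model)
        probs = torch.stack([torch.sigmoid(model(images)).squeeze(1) for _ in range(T)])
        p_disease = probs.mean(dim=0)     # predictive probability of disease
        uncertainty = probs.std(dim=0)    # per-image uncertainty estimate
        return p_disease, uncertainty

    def refer_most_uncertain(p_disease, uncertainty, fraction=0.2):
        # Flag the most uncertain fraction of cases for manual inspection;
        # the remaining cases receive an automated decision by thresholding.
        order = torch.argsort(uncertainty, descending=True)
        n_refer = int(fraction * len(p_disease))
        referred, retained = order[:n_refer], order[n_refer:]
        return referred, retained, p_disease[retained] > 0.5

In this sketch, diagnostic sensitivity and specificity would then be computed only on the retained cases, which is how an uncertainty-informed referral curve over a 0-20% referral range could be evaluated.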
Acoustic-trawl surveys are an important tool for marine stock management and environmental monitoring of marine life. Correctly assigning the acoustic signal to species or species groups is a challenge, and trawl camera systems have recently been developed to support the interpretation of acoustic data. Examining images from known positions in the trawl track provides high-resolution ground truth for the presence of species. Here, we develop and deploy a deep learning neural network to automate the classification of species present in images from the Deep Vision trawl camera system. To remedy the scarcity of training data, we develop a novel training regime based on realistic simulation of Deep Vision images. We achieved a classification accuracy of 94% for blue whiting, Atlantic herring, and Atlantic mackerel, showing that automatic species classification is a viable and efficient approach, and that using synthetic data can effectively mitigate the all-too-common lack of training data.
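The abstract does not detail how the simulated training images were generated, so the following is only a speculative sketch of one common way to synthesize such data: compositing segmented fish crops onto empty background frames. The file paths, species names, and augmentation ranges are illustrative assumptions and do not reproduce the authors' actual pipeline.

    import random
    from PIL import Image

    SPECIES = ["blue_whiting", "atlantic_herring", "atlantic_mackerel"]

    def composite(background_path, fish_path, out_path):
        # Paste a segmented fish crop (RGBA with transparent surround) onto an empty
        # background frame at a random scale and position, producing one labelled
        # synthetic training image.
        background = Image.open(background_path).convert("RGB")
        fish = Image.open(fish_path).convert("RGBA")
        scale = random.uniform(0.5, 1.5)
        fish = fish.resize((int(fish.width * scale), int(fish.height * scale)))
        x = random.randint(0, max(0, background.width - fish.width))
        y = random.randint(0, max(0, background.height - fish.height))
        background.paste(fish, (x, y), mask=fish)   # alpha channel masks the paste
        background.save(out_path)

Repeating this over many crops and backgrounds, with the class label taken from the species of the pasted crop, would yield an arbitrarily large labelled set for training a classifier.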
The age structure of a fish population has important implications for recruitment processes and population fluctuations, and is a key input to fisheries-assessment models. The current method of determining age structure relies on manually reading age from otoliths, and the process is labor intensive and dependent on specialist expertise. Recent advances in machine learning have provided methods that have been remarkably successful in a variety of settings, with potential to automate analysis that previously required manual curation. Machine learning models have previously been successfully applied to object recognition and similar image analysis tasks. Here we investigate whether deep learning models can also be used for estimating the age of otoliths from images. We adapt a pre-trained convolutional neural network designed for object recognition to estimate the age of fish from otolith images. The model is trained and validated on a large collection of images of Greenland halibut otoliths. We show that the model works well, and that its precision is comparable to documented precision obtained by human experts. Automating this analysis may help to improve consistency, lower cost, and increase the extent of age estimation. Given that adequate data are available, this method could also be used to estimate the age of other species using images of otoliths or fish scales.
The age structure of a fish population has important implications for recruitment processes and population fluctuations, and is a key input to fisheries assessment models. The current method relies on manually reading age from otoliths, and the process is labor intensive and dependent on specialist expertise. Advances in machine learning have recently brought forth methods that have been remarkably successful in a variety of settings, with potential to automate analysis that previously required manual curation. Machine learning models have previously been successfully applied to object recognition and similar image analysis tasks. Here we investigate whether deep learning models can also be used for estimating the age of otoliths from images. We adapt a standard neural network model designed for object recognition to the task of estimating age from otolith images. The model is trained and validated on a large collection of images of Greenland halibut otoliths. We show that the model works well, and that its precision is comparable to and may even surpass that of human experts. Automating this analysis will help to improve consistency, lower cost, and increase the scale of age prediction. Similar approaches can likely be used for otoliths from other species as well as for reading fish scales. The method is therefore an important step forward for improving the age structure estimates of fish populations.

Age of fish is a key parameter in age-structured fisheries-assessment models. Age is usually considered as a discrete parameter (age group) that identifies the individual year classes, i.e. those originating from the spawning activity in a given year (1). By definition, an individual is categorized as age group 0 from the early larval stage until January 1st, when all age groups increase their age by one. The assessment models typically express the dynamics of the individual year class from the age when they recruit, through sexual maturation and reproduction, and throughout their longevity (2). The models are fitted to data both from commercial catches and fishery-independent surveys, and a sampling program for a fish stock typically involves sampling throughout the year and across several different types of fishing gears.

The age needs to be established from the individual fish in the sampling programs. Since fish growth and, hence, age at length vary in time and space (e.g., 3), linked to environmental factors like temperature or availability of food, morphology (e.g., fish length) cannot be reliably used as a proxy for age. Instead, the age is determined from a subset of individuals and usually used in conjunction with length data and information about time and location of sampling (3). The age is "read" from the annual zones in otoliths or scales. Although simple in principle, age reading depends on correct identification of zonation patterns that may consist of both true annual zones and zones representing other (unknown) temporal variation (1; 4). ...
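As a rough illustration of how a pre-trained object-recognition network can be adapted to age estimation, the sketch below replaces the classification head of an ImageNet-pretrained CNN with a single regression output and trains it with a squared-error loss. The ResNet-18 backbone, the loss, and the training step are illustrative assumptions; the paper's actual architecture and training setup may differ.

    import torch
    import torch.nn as nn
    from torchvision import models

    def build_age_regressor():
        # Start from an ImageNet-pretrained backbone (assumed choice) and replace the
        # final classification layer with a single continuous output for age.
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        return backbone

    def train_step(model, images, ages, optimizer, criterion=nn.MSELoss()):
        # One gradient step minimizing squared error between predicted and read age.
        model.train()
        optimizer.zero_grad()
        predicted = model(images).squeeze(1)
        loss = criterion(predicted, ages.float())
        loss.backward()
        optimizer.step()
        return loss.item()

At evaluation time, the continuous predictions would typically be rounded to whole years (age groups) before being compared against the ages read by human experts.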