Digital pathology platforms with integrated artificial intelligence have the potential to increase the efficiency of the nonclinical pathologist’s workflow by screening and prioritizing slides with lesions and highlighting areas with specific lesions for review. Herein, we describe the comparison of various single- and multi-magnification convolutional neural network (CNN) architectures to accelerate the detection of lesions in tissues. Different models were evaluated to define performance characteristics and efficiency in accurately identifying lesions in 5 key rat organs (liver, kidney, heart, lung, and brain). Cohorts for liver and kidney were collected from the TG-GATEs open-source repository, and those for heart, lung, and brain from internally selected R&D studies. Annotations were performed, and models were trained on each of the available lesion classes in each organ. Various class-consolidation approaches were evaluated, from generalized lesion detection to individual lesion detections. The relationship between the number of annotated lesions and the precision and accuracy of model performance is elucidated. The utility of multi-magnification CNN implementations in specific tissue subtypes is also demonstrated. The use of these CNN-based models offers users the ability to apply generalized lesion detection to whole-slide images, with the potential to generate novel quantitative data that would not be possible with conventional image analysis techniques.
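The multi-magnification approach can be pictured as a multi-branch patch classifier. The sketch below is a minimal, hypothetical PyTorch illustration with two co-centered magnifications; the layer sizes, magnifications, and class count are assumptions for illustration, not the architectures evaluated in the study.

```python
# Minimal sketch (not the study's implementation): a two-branch
# multi-magnification patch classifier. Two co-centered patches of the
# same tissue location -- one at high magnification, one at low
# magnification -- are encoded separately and their features are fused
# before a lesion/normal prediction.
import torch
import torch.nn as nn

class MultiMagLesionClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        def branch():
            # Small convolutional encoder; a pretrained backbone
            # (e.g. a ResNet) would normally be used instead.
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.high_mag = branch()  # e.g. 20x patch: fine cellular detail
        self.low_mag = branch()   # e.g. 5x patch: surrounding tissue context
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, patch_high, patch_low):
        feats = torch.cat([self.high_mag(patch_high),
                           self.low_mag(patch_low)], dim=1)
        return self.head(feats)

# Example: classify a batch of 8 co-centered patch pairs (224x224 px).
model = MultiMagLesionClassifier(num_classes=2)
logits = model(torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224))
print(logits.shape)  # torch.Size([8, 2])
```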
Interactive image segmentation is extensively used in photo editing when the aim is to separate a foreground object from its background so that it is available for various applications. The goal of the interaction is to obtain an accurate segmentation of the object with minimal human effort. To improve the usability and user experience of interactive image segmentation, we present three interaction methods and study the effect of each using both objective and subjective metrics, such as accuracy, amount of effort needed, cognitive load, and users' preferred interaction method. The novelty of this paper is twofold. First, the evaluation of interaction methods is carried out with objective metrics, such as object and boundary accuracies, in tandem with subjective metrics to cross-check whether they support each other. Second, we analyze electroencephalography (EEG) data obtained from subjects performing the segmentation as an indicator of brain activity. The experimental results potentially give valuable cues for the development of easy-to-use yet efficient interaction methods for image segmentation.
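For concreteness, the kind of objective metrics referred to above (object and boundary accuracies) can be computed roughly as follows; the definitions below are generic illustrations, not the exact formulations used in the paper.

```python
# Illustrative sketch of region and boundary accuracy for a binary
# segmentation: region accuracy as intersection-over-union (IoU), and a
# simple boundary F1 that compares thin bands around the predicted and
# ground-truth object boundaries. The tolerance width is an assumption.
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def object_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def boundary_f1(pred: np.ndarray, gt: np.ndarray, tol: int = 2) -> float:
    # Boundary = mask minus its erosion; a tolerance band is obtained
    # by dilating each boundary by `tol` pixels.
    def boundary(mask):
        mask = mask.astype(bool)
        return mask & ~binary_erosion(mask)
    bp, bg = boundary(pred), boundary(gt)
    bp_tol = binary_dilation(bp, iterations=tol)
    bg_tol = binary_dilation(bg, iterations=tol)
    precision = (bp & bg_tol).sum() / max(bp.sum(), 1)
    recall = (bg & bp_tol).sum() / max(bg.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)
```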
In Tg-rasH2 carcinogenicity mouse models, a positive control group is treated with a carcinogen such as urethane or N-nitroso-N-methylurea to test study validity based on the presence of the expected proliferative lesions in the transgenic mice. We hypothesized that artificial intelligence–based deep learning (DL) could provide decision support for the toxicologic pathologist by screening for the proliferative changes and verifying the expected pattern in the positive control groups. Whole slide images (WSIs) of the lungs, thymus, and stomach from positive control groups were used for supervised training of a convolutional neural network (CNN). A single pathologist annotated WSIs of normal and abnormal tissue regions for training the CNN-based supervised classifier using INHAND criteria. The algorithm was evaluated using a subset of tissue regions that were not used for training, and additional tissues were then evaluated blindly by 2 independent pathologists. A binary output (proliferative classes present or not) from the pathologists was compared to that of the CNN classifier. The CNN model grouped proliferative lesion–positive and lesion-negative animals with high concordance with the pathologists. This process simulated a workflow for review of these studies, whereby a DL algorithm could provide decision support for the pathologists in a nonclinical study.
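The animal-level decision step described above can be sketched as a simple aggregation over tile-level CNN probabilities; the thresholds and concordance measure below are illustrative assumptions rather than the study's actual parameters.

```python
# Hedged sketch of the downstream decision step (the CNN itself is not
# shown): per-tile "proliferative" probabilities from a trained
# classifier are aggregated into a binary animal-level call, which can
# then be compared with the pathologists' binary reads.
import numpy as np

def animal_level_call(tile_probs: np.ndarray,
                      tile_threshold: float = 0.5,
                      min_positive_tiles: int = 5) -> bool:
    """Flag an animal as 'proliferative lesion present' if enough tiles
    exceed the tile-level probability threshold (values are assumed)."""
    return int((tile_probs >= tile_threshold).sum()) >= min_positive_tiles

def concordance(model_calls, pathologist_calls) -> float:
    model_calls = np.asarray(model_calls, dtype=bool)
    pathologist_calls = np.asarray(pathologist_calls, dtype=bool)
    return float((model_calls == pathologist_calls).mean())

# Example with synthetic tile probabilities for 3 animals.
rng = np.random.default_rng(0)
animals = [
    rng.uniform(0.0, 0.4, 500),                                   # normal
    np.concatenate([rng.uniform(0.0, 0.4, 480),
                    rng.uniform(0.8, 1.0, 20)]),                  # focal lesion
    rng.uniform(0.0, 0.3, 500),                                   # normal
]
model_calls = [animal_level_call(p) for p in animals]
print(model_calls, concordance(model_calls, [False, True, False]))
```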
Rehabilitation from cardiovascular disease (CVD) usually requires lifestyle changes, especially an increase in exercise and physical activity. However, uptake of and adherence to exercise are low in community-based programmes. We propose a mobile application that allows users to choose the type of exercise and complete it at a convenient time in the comfort of their own home. Grounded in a behaviour change framework, the application provides feedback and encouragement to continue exercising and to improve on previous results. The application also utilizes wearable wireless technologies in order to provide highly personalized feedback. The application can accurately detect whether a specific exercise is being done and count the associated number of repetitions using accelerometer or gyroscope signals. Machine learning models are employed to recognize individual local muscular endurance (LME) exercises, achieving overall accuracy of more than 98%. This technology enables near-real-time personalized feedback that mimics the feedback the user might expect from an instructor and has been shown to motivate users to continue the recovery process.
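As a rough illustration of the repetition-counting component, a single accelerometer axis can be low-pass filtered and its peaks counted; the sampling rate, filter settings, and thresholds below are assumptions, not the authors' pipeline.

```python
# Simplified sketch of repetition counting from one accelerometer axis:
# low-pass filter the signal, then count peaks with sufficient
# prominence and spacing.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def count_repetitions(accel: np.ndarray, fs: float = 50.0) -> int:
    # 3 Hz low-pass Butterworth filter removes sensor noise while
    # keeping the slow periodic motion of an LME exercise.
    b, a = butter(4, 3.0 / (fs / 2), btype="low")
    smooth = filtfilt(b, a, accel)
    peaks, _ = find_peaks(smooth,
                          prominence=0.5 * smooth.std(),
                          distance=int(0.8 * fs))  # >= 0.8 s between reps
    return len(peaks)

# Example: a synthetic 20-second signal containing 10 repetitions.
fs = 50.0
t = np.arange(0, 20, 1 / fs)
signal = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.randn(t.size)
print(count_repetitions(signal, fs))  # ~10
```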
In this paper, we investigate the parameters underpinning our previously presented system for detecting unusual events in surveillance applications [1]. The system identifies anomalous events using an unsupervised data-driven approach. During a training period, typical activities within a surveilled environment are modeled using multi-modal sensor readings. Significant deviations from the established model of regular activity can then be flagged as anomalous at run-time. Using this approach, the system can be deployed and automatically adapt for use in any environment without any manual adjustment. Experiments were carried out on two days of audio-visual data and evaluated against a manually annotated ground truth. We investigate sensor fusion and quantitatively evaluate the performance gains over single-modality models. We also investigate different formulations of our cluster-based model of usual scenes, as well as the impact of dynamic thresholding on identifying anomalous events. Experimental results are promising, even when modeling is performed using very simple audio and visual features.
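A cluster-based model of usual scenes with dynamic thresholding, as described above, can be sketched as follows; the feature choices, cluster count, and thresholding rule are illustrative assumptions rather than the system's actual configuration.

```python
# Illustrative sketch: k-means is fit on fused audio-visual feature
# vectors from the training period, and at run time a frame is flagged
# as anomalous when its distance to the nearest cluster centre exceeds
# a dynamically updated threshold over recent observations.
import numpy as np
from sklearn.cluster import KMeans

class UsualSceneModel:
    def __init__(self, n_clusters: int = 8, k_sigma: float = 3.0):
        self.kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        self.k_sigma = k_sigma
        self.dists = []  # running history of recent distances

    def fit(self, train_features: np.ndarray):
        self.kmeans.fit(train_features)
        self.dists = list(self.kmeans.transform(train_features).min(axis=1))
        return self

    def is_anomalous(self, fused_feature: np.ndarray) -> bool:
        d = self.kmeans.transform(fused_feature.reshape(1, -1)).min()
        # Dynamic threshold: mean + k*std of recently observed distances.
        threshold = np.mean(self.dists) + self.k_sigma * np.std(self.dists)
        self.dists.append(d)
        self.dists = self.dists[-1000:]  # sliding window of recent frames
        return d > threshold

# Example: fused audio-visual features (e.g. audio energy concatenated
# with foreground-pixel counts) as rows of a matrix.
rng = np.random.default_rng(1)
model = UsualSceneModel().fit(rng.normal(size=(2000, 6)))
print(model.is_anomalous(rng.normal(size=6)))         # typically False
print(model.is_anomalous(rng.normal(size=6) + 10.0))  # typically True
```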