We propose a generalized focal loss function based on the Tversky index to address the issue of data imbalance in medical image segmentation. Compared with the commonly used Dice loss, our loss function achieves a better trade-off between precision and recall when training on small structures such as lesions. To evaluate our loss function, we improve the attention U-Net model by incorporating an image pyramid to preserve contextual features. We experiment on the BUS 2017 dataset and the ISIC 2018 dataset, where lesions occupy 4.84% and 21.4% of the image area, and improve segmentation accuracy over the standard U-Net by 25.7% and 3.6%, respectively.
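The abstract does not spell out the loss formulation; a minimal sketch of one common focal Tversky loss, assuming the usual Tversky index with false-negative/false-positive weights alpha and beta and a focal exponent gamma (the function name and default values here are illustrative, not the paper's exact settings), could look like this:

```python
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Hedged sketch of a focal loss built on the Tversky index.

    pred, target: tensors of shape (N, ...) with values in [0, 1].
    alpha/beta weight false negatives vs. false positives; gamma focuses
    training on hard examples (the paper's exact exponent may differ).
    """
    pred = pred.reshape(pred.shape[0], -1)
    target = target.reshape(target.shape[0], -1)
    tp = (pred * target).sum(dim=1)
    fn = ((1 - pred) * target).sum(dim=1)
    fp = (pred * (1 - target)).sum(dim=1)
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return torch.pow(1.0 - tversky, gamma).mean()
```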
Detection of Alzheimer's Disease (AD) from neuroimaging data such as MRI through machine learning has been a subject of intense research in recent years. The recent success of deep learning in computer vision has advanced such research further. However, common limitations of such algorithms are the reliance on a large number of training images and the requirement of careful optimization of the architecture of deep networks. In this paper, we attempt to solve these issues with transfer learning, where state-of-the-art architectures such as VGG and Inception are initialized with pre-trained weights from large benchmark datasets consisting of natural images, and only the fully-connected layer is re-trained with a small number of MRI images. We employ image entropy to select the most informative slices for training. Through experimentation on the OASIS MRI dataset, we show that with a training set almost 10 times smaller than those of state-of-the-art methods, we reach comparable or even better performance than current deep-learning-based methods.
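The entropy-based slice selection is only named, not detailed; one plausible reading, sketched below with NumPy (the array layout and top-k parameter are assumptions), scores each 2-D slice by the Shannon entropy of its intensity histogram and keeps the highest-scoring slices:

```python
import numpy as np

def slice_entropy(slice_2d, bins=256):
    """Shannon entropy of the intensity histogram of a single MRI slice."""
    hist, _ = np.histogram(slice_2d, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_informative_slices(volume, k=32):
    """Keep the k slices (along axis 0) with the highest entropy, in scan order."""
    scores = np.array([slice_entropy(s) for s in volume])
    top_idx = np.sort(np.argsort(scores)[::-1][:k])
    return volume[top_idx]
```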
Detection of Alzheimer's disease (AD) from neuroimaging data such as MRI through machine learning has been a subject of intense research in recent years. The recent success of deep learning in computer vision has advanced such research further. However, common limitations of such algorithms are the reliance on a large number of training images and the requirement of careful optimization of the architecture of deep networks. In this paper, we attempt to solve these issues with transfer learning, where the state-of-the-art VGG architecture is initialized with pre-trained weights from large benchmark datasets consisting of natural images. The network is then fine-tuned with layer-wise tuning, where only a pre-defined group of layers is trained on MRI images. To shrink the training data size, we employ image entropy to select the most informative slices. Through experimentation on the ADNI dataset, we show that with a training set 10 to 20 times smaller than those of other contemporary methods, we reach state-of-the-art performance in the AD vs. NC, AD vs. MCI, and MCI vs. NC classification problems, with a 4% and a 7% increase in accuracy over the state of the art for AD vs. MCI and MCI vs. NC, respectively. We also provide a detailed analysis of the effect of the intelligent training data selection method, of changing the training size, and of changing the number of layers to be fine-tuned. Finally, we provide class activation maps (CAM) that demonstrate how the proposed model focuses on discriminative image regions that are neuropathologically relevant and can help the healthcare practitioner interpret the model's decision-making process.

INDEX TERMS: Deep learning, transfer learning, convolutional neural network, Alzheimer's.
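The paper's exact layer grouping is not reproduced here; a minimal layer-wise fine-tuning sketch with torchvision's VGG16 (the number of unfrozen feature layers is an assumption, and the exact pretrained-weights API depends on the torchvision version) might look like:

```python
import torch.nn as nn
from torchvision import models

def build_layerwise_vgg(num_classes=2, trainable_feature_layers=4):
    """Load an ImageNet-pretrained VGG16, freeze it, then unfreeze only a
    pre-defined group of layers at the end of the feature extractor plus a
    new classification head for the AD/MCI/NC task."""
    model = models.vgg16(weights="IMAGENET1K_V1")  # torchvision >= 0.13 API
    for param in model.parameters():
        param.requires_grad = False
    for layer in list(model.features.children())[-trainable_feature_layers:]:
        for param in layer.parameters():
            param.requires_grad = True
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    return model
```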
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black-box Machine Learning (ML) algorithms. LIME typically explains a single prediction of any ML model by learning a simpler interpretable model (e.g., a linear classifier) around that prediction: it generates simulated data around the instance through random perturbation and obtains feature importance by applying some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation causes shifts in the data and instability in the generated explanations, so different explanations can be produced for the same prediction. These are critical issues that can prevent the deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we utilize Agglomerative Hierarchical Clustering (AHC) to group the training data and K-Nearest Neighbour (KNN) to select the cluster relevant to the new instance being explained. After finding the relevant cluster, a simple model (i.e., a linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively compare the stability and faithfulness of DLIME with those of LIME.
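A hedged sketch of the described pipeline using scikit-learn (the linear surrogate, cluster count, and neighbour count below are illustrative choices, not the paper's exact settings):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import Ridge

def dlime_explain(X_train, black_box_predict, x_instance, n_clusters=3, n_neighbors=5):
    """Deterministic local explanation: cluster the training data with AHC,
    pick the instance's cluster via KNN, and fit a linear surrogate on it."""
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)
    knn = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X_train, labels)
    cluster_id = knn.predict(x_instance.reshape(1, -1))[0]
    X_local = X_train[labels == cluster_id]
    y_local = black_box_predict(X_local)      # black-box outputs on the local cluster
    surrogate = Ridge(alpha=1.0).fit(X_local, y_local)
    return surrogate.coef_                    # feature weights act as the explanation
```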
Heart rate variability (HRV) is the variation in the time intervals between consecutive heartbeats. It is used to analyze the Autonomic Nervous System (ANS), the control system that modulates the body's unconscious functions such as cardiac activity, respiration, digestion, blood pressure, urination, and dilation/constriction of the pupil. This review article presents a summary and analysis of research works that analyzed HRV in relation to morbidity, pain, drowsiness, stress, and exercise through signal processing and machine learning methods. The points of emphasis in HRV research, as well as the gaps in current practice that can be addressed to improve its quality, are discussed in detail. Restricting the physiological signals to electrocardiogram (ECG), electrodermal activity (EDA), photoplethysmography (PPG), and respiration (RESP) yielded 25 articles that examined the causes and effects of increased/reduced HRV. Reduced HRV was generally associated with increased morbidity and stress. High HRV normally indicated good health and, in some instances, could signify clinical events of interest such as drowsiness. Effective analysis of HRV during ambulatory and motion situations such as exercise, video gaming, and driving could have a significant impact on improving social well-being. Detection of HRV in motion is far from perfect; studies involving exercise or driving reported accuracies as high as 85% and as low as 59%. HRV detection in motion can be further improved by harnessing advances in machine learning techniques.
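As a concrete illustration of the time-domain HRV measures such studies typically compute (the function name and example intervals below are illustrative, not drawn from the reviewed articles):

```python
import numpy as np

def hrv_time_domain(rr_intervals_ms):
    """SDNN and RMSSD from successive RR (inter-beat) intervals in milliseconds."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    sdnn = rr.std(ddof=1)                        # overall variability of RR intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # short-term, beat-to-beat variability
    return {"SDNN": sdnn, "RMSSD": rmssd}

# Example: RR intervals around 800 ms (~75 bpm) with modest variability
print(hrv_time_domain([802, 795, 810, 788, 805, 799, 812]))
```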