Automatic detection and classification of masses in mammograms remain a major challenge and play a crucial role in assisting radiologists toward an accurate diagnosis. In this paper, we propose a novel computer-aided diagnosis (CAD) system based on a regional deep learning technique: an ROI-based Convolutional Neural Network (CNN) called You Only Look Once (YOLO). Our proposed YOLO-based CAD system consists of four main stages: mammogram preprocessing, feature extraction using multiple deep convolutional layers, mass detection with a confidence model, and finally mass classification using a fully connected neural network (FC-NN). A set of training mammograms annotated with mass ROIs and their types is used to train YOLO. The trained YOLO-based CAD system detects masses and classifies them as benign or malignant. Our results show that the proposed YOLO-based CAD system detects mass locations with an overall accuracy of 96.33%. The system also distinguishes between benign and malignant lesions with an overall accuracy of 85.52%. The proposed system appears feasible as a CAD system capable of detection and classification at the same time. It also handles some challenging breast cancer cases, such as masses located in the pectoral muscle or in dense regions.
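The abstract does not state how a predicted mass location is scored as correct; a common convention in detection work, shown here only as an illustrative sketch, is to match a predicted bounding box against the ground-truth ROI by intersection-over-union (IoU) with a fixed threshold:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def detection_correct(pred_box, gt_box, thr=0.5):
    """Count a detection as a hit if its IoU with the ground-truth ROI meets thr.

    The 0.5 threshold is a common default, not a value taken from the paper.
    """
    return iou(pred_box, gt_box) >= thr
```

Detection accuracy would then be the fraction of test mammograms whose predicted box passes this check.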
Recognition of hand activities could provide new information for daily human activity logging and gesture-interface applications. However, it remains technically challenging because hand motions are delicate and movement contexts are complex. In this work, we propose hand activity recognition (HAR) based on a single inertial measurement unit (IMU) sensor worn at one wrist, using a deep recurrent neural network. The proposed HAR works directly with signals from the tri-axial accelerometer, gyroscope, and magnetometer sensors within one IMU. We evaluated the performance of our HAR on a public human hand activity database covering six hand activities: Open Door, Close Door, Open Fridge, Close Fridge, Clean Table, and Drink from Cup. Our results show an overall recognition accuracy of 80.09% with discrete standard epochs and 74.92% with noise-added epochs. With continuous time-series epochs, an accuracy of 71.75% was obtained.
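The abstract distinguishes discrete epochs from continuous time-series epochs but does not give the segmentation details. As an illustration only, the usual way to cut a continuous multichannel IMU stream (here assumed to be nine channels: tri-axial accelerometer, gyroscope, and magnetometer) into fixed-length epochs is a sliding window; the window and step sizes below are placeholders, not values from the paper:

```python
import numpy as np

def make_epochs(signal, win, step):
    """Segment a (T, C) multichannel IMU stream into overlapping fixed-length epochs.

    signal: array of shape (T, C), T time samples of C sensor channels
    win:    epoch length in samples
    step:   hop between consecutive epoch starts (step < win gives overlap)
    Returns an array of shape (n_epochs, win, C).
    """
    T = signal.shape[0]
    starts = range(0, T - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])
```

Each resulting epoch would then be fed to the recurrent network as one recognition unit.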
The paired high- and low-energy images of Dual Energy X-ray Absorptiometry (DEXA) suffer from noise due to the weak X-ray dose used. Denoising these DEXA images could be a key process for enhancing the Bone Mineral Density (BMD) map that is derived from a pair of high- and low-energy images, which could further improve the diagnostic accuracy for bone fractures and osteoporosis. In this paper, we present a denoising technique for the dual high- and low-energy DEXA images via a non-local means filter (NLMF). The noise of the dual DEXA images is modeled based on both the source and detector noise of a DEXA system. Then, the parameters of the proposed NLMF are optimized for denoising using experimental data from uniform phantoms. The optimized NLMF is tested and verified on DEXA images of the uniform phantoms and of a real human spine. Quantitative evaluation shows Signal-to-Noise Ratio (SNR) improvements of 30.36% and 27.02% for the high- and low-energy phantom images and of 22.28% and 33.43% for the high- and low-energy real spine images, respectively. Our work suggests that denoising via NLMF could be a key preprocessing step for clinical DEXA imaging.
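For readers unfamiliar with the filter, a non-local means filter replaces each pixel by a weighted average of pixels whose surrounding patches look similar, with weights decaying with patch distance. The minimal NumPy sketch below illustrates the idea only; the patch size, search-window size, and filtering strength `h` are the parameters the paper says were optimized on phantom data, and the values here are arbitrary placeholders:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Minimal non-local means filter (illustrative, unoptimized sketch).

    img:    2-D float image
    patch:  odd side length of the similarity patch
    search: odd side length of the search window around each pixel
    h:      filtering strength; larger h averages more aggressively
    """
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch dissimilarity
                    w = np.exp(-d2 / (h * h))          # similarity weight
                    wsum += w
                    acc += w * padded[ni, nj]
            out[i, j] = acc / wsum
    return out
```

Production code would use an optimized implementation (e.g., `skimage.restoration.denoise_nl_means`) rather than this quadruple loop.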
Recognition of hand activities of daily living (hand-ADL) is useful in human–computer interaction, lifelogging, and healthcare applications. However, developing a reliable human activity recognition (HAR) system for hand-ADL with only a single wearable sensor remains a challenge, because hand movements are typically transient and sporadic. Approaches based on deep learning methodologies that reduce noise and extract relevant features directly from raw data are becoming more promising for implementing such HAR systems. In this work, we present an ARMA-based deep autoencoder and a deep recurrent neural network (RNN) using Gated Recurrent Units (GRU) for recognition of hand-ADL from the signals of a single wearable IMU sensor. The integrated ARMA-based autoencoder denoises the raw time-series signals of hand activities so that a better representation of human hand activities can be obtained. Then, our deep RNN-GRU recognizes seven hand-ADL from the output of the autoencoder: namely, Open Door, Close Door, Open Refrigerator, Close Refrigerator, Open Drawer, Close Drawer, and Drink from Cup. The proposed RNN-GRU with the autoencoder achieves a mean accuracy of 84.94% and an F1-score of 83.05%, outperforming conventional classifiers such as RNN-LSTM, BRNN-LSTM, CNN, and Hybrid-RNNs by 4–10% in both accuracy and F1-score. The experimental results also show that the autoencoder improves both the accuracy and the F1-score of each classifier: by 12.8% for RNN-LSTM, 4.37% for BRNN-LSTM, 15.45% for CNN, 14.6% for Hybrid-RNN, and 12.4% for the proposed RNN-GRU.
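To make the GRU recurrence concrete, the sketch below implements a single GRU cell forward pass in NumPy with the standard update-gate/reset-gate/candidate equations. It is a didactic, untrained toy (random weights, dimensions chosen arbitrarily), not the network from the paper, which would be built and trained in a deep learning framework:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell forward pass (random untrained weights; sketch only)."""

    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hid_dim)
        # input and recurrent weights for update (z), reset (r), candidate (n)
        self.Wz = rng.uniform(-s, s, (hid_dim, in_dim))
        self.Uz = rng.uniform(-s, s, (hid_dim, hid_dim))
        self.Wr = rng.uniform(-s, s, (hid_dim, in_dim))
        self.Ur = rng.uniform(-s, s, (hid_dim, hid_dim))
        self.Wn = rng.uniform(-s, s, (hid_dim, in_dim))
        self.Un = rng.uniform(-s, s, (hid_dim, hid_dim))

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)        # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)        # reset gate
        n = np.tanh(self.Wn @ x + self.Un @ (r * h))  # candidate state
        return (1.0 - z) * n + z * h                  # gated interpolation

    def run(self, xs):
        """Fold a (T, in_dim) sequence into the final hidden state."""
        h = np.zeros(self.Wz.shape[0])
        for x in xs:
            h = self.step(x, h)
        return h
```

In the full system, the final hidden state (or the state sequence) would feed a softmax layer over the seven hand-ADL classes.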