Melanoma is considered the most serious type of skin cancer, and its mortality rate worldwide is much higher than that of other skin cancers. Various computer-aided solutions have been proposed to identify melanoma correctly. However, the visual similarity between melanoma and benign nevi makes it very difficult to design a reliable Computer-Aided Diagnosis (CAD) system for accurate melanoma detection. Existing systems either use traditional machine learning models that rely on handpicked features, or use deep learning-based methods that learn features from complete images. Automatically extracting the most discriminative features for skin cancer remains an important research problem, as such features can further improve deep learning training. Furthermore, the limited number of available images also poses a problem for deep learning models. Following this line of research, we propose an intelligent Region of Interest (ROI)-based system to identify and discriminate melanoma from nevus lesions using a transfer learning approach. An improved k-means algorithm is used to extract ROIs from the images. This ROI-based approach helps to identify discriminative features because only image regions containing melanoma cells are used to train the system. We further apply a Convolutional Neural Network (CNN)-based transfer learning model with data augmentation to the ROI images of the DermIS and DermQuest datasets. The proposed system achieves 97.9% and 97.4% accuracy on DermIS and DermQuest, respectively, and outperforms existing methods that use complete images for classification.
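The ROI-extraction step described above can be sketched with a plain intensity-based k-means segmentation. This is a simplified stand-in: the abstract's "improved" k-means variant is not specified, so the clustering below is ordinary 1-D k-means, and the assumption that the lesion is the darker of two clusters is an illustrative heuristic, not the authors' stated rule.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Plain k-means on pixel intensities (simplified stand-in for the
    paper's improved k-means, whose modifications are not detailed here)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # assign each pixel to the nearest cluster center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return centers, labels

def extract_roi_mask(gray_img):
    """Cluster intensities into two groups and return the mask of the
    darker cluster (lesions are typically darker than surrounding skin)."""
    flat = gray_img.ravel().astype(float)
    centers, labels = kmeans_1d(flat, k=2)
    lesion_cluster = int(np.argmin(centers))
    return (labels == lesion_cluster).reshape(gray_img.shape)

# Synthetic example: bright "skin" background with a dark square "lesion".
img = np.full((64, 64), 200.0)
img[20:40, 20:40] = 40.0
mask = extract_roi_mask(img)
```

The resulting binary mask can then be used to crop the lesion region before it is passed to the CNN, so the network trains on lesion pixels rather than whole images.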
In recent years, Human Activity Recognition (HAR) has become one of the most important research topics in the domains of health and human-machine interaction. Many artificial intelligence-based models have been developed for activity recognition; however, these algorithms fail to extract spatial and temporal features jointly and therefore perform poorly on real-world, long-term HAR. Furthermore, only a limited number of datasets for physical activity recognition are publicly available, and they cover few activities. Considering these limitations, we develop a hybrid model that combines a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) for activity recognition, where the CNN extracts spatial features and the LSTM network learns temporal information. Additionally, we generate a new challenging dataset, collected from 20 participants using the Kinect V2 sensor, containing 12 different classes of human physical activities. An extensive ablation study is performed over traditional machine learning and deep learning models to obtain the optimal solution for HAR. The CNN-LSTM technique achieves an accuracy of 90.89%, which shows that the proposed model is suitable for HAR applications.
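The CNN-LSTM pipeline described above can be illustrated with a minimal numpy forward pass: a convolutional layer summarizes each frame spatially, and an LSTM cell carries information across frames. This is a toy sketch under assumed shapes and randomly initialized weights, not the authors' trained architecture; the layer sizes, single conv kernel, and gate packing order are all illustrative choices.

```python
import numpy as np

def conv2d_valid(x, kernel):
    """Naive single-channel 'valid' cross-correlation (the spatial step)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; W, U, b pack input/forget/output/candidate gates."""
    z = W @ x + U @ h + b
    n = h.size
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2 * n]), sigmoid(z[2 * n:3 * n])
    g = np.tanh(z[3 * n:])
    c = f * c + i * g          # update cell state (temporal memory)
    h = o * np.tanh(c)         # emit hidden state
    return h, c

def cnn_lstm_forward(frames, kernel, W, U, b, n_hidden):
    """Per-frame conv features (spatial), then an LSTM over time (temporal)."""
    h = np.zeros(n_hidden)
    c = np.zeros(n_hidden)
    for frame in frames:
        feat = np.maximum(conv2d_valid(frame, kernel), 0).ravel()  # ReLU
        h, c = lstm_step(feat, h, c, W, U, b)
    return h  # final temporal summary, fed to a classifier head

# Toy sequence of 5 random 8x8 "frames" with hypothetical dimensions.
rng = np.random.default_rng(0)
T, H, Wd, n_hidden = 5, 8, 8, 4
frames = rng.standard_normal((T, H, Wd))
kernel = rng.standard_normal((3, 3))
feat_dim = (H - 2) * (Wd - 2)
W = rng.standard_normal((4 * n_hidden, feat_dim)) * 0.1
U = rng.standard_normal((4 * n_hidden, n_hidden)) * 0.1
b = np.zeros(4 * n_hidden)
summary = cnn_lstm_forward(frames, kernel, W, U, b, n_hidden)
```

In practice both stages would be implemented in a deep learning framework and trained end-to-end; the sketch only shows how spatial and temporal feature extraction compose.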
Alzheimer's Disease (AD) is the most common form of dementia. It gradually progresses from a mild stage to a severe one, impairing the ability to perform everyday tasks without assistance. It is a neurodegenerative illness that presently has no specified cure. Computer-aided diagnostic systems have played an important role in helping physicians identify AD. However, classifying AD into its four stages (No Dementia, Very Mild Dementia, Mild Dementia, and Moderate Dementia) remains an open research area. Deep learning-assisted computer-aided solutions have proved especially useful because of their high accuracy. However, the most common problem with deep learning architectures is that they require large amounts of training data. Furthermore, samples should be evenly distributed among the classes to avoid the class imbalance problem, and the publicly available OASIS dataset suffers from serious class imbalance. In this research, we employ a transfer learning-based technique with data augmentation on 3D Magnetic Resonance Imaging (MRI) views from the OASIS dataset. The proposed model achieves 98.41% accuracy using a single view of the brain MRI and 95.11% using 3D views, outperforming existing techniques for classifying the stages of Alzheimer's disease.
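The augmentation-for-imbalance idea above can be sketched as follows. The specific augmentations the authors used are not stated, so the flips, 90-degree rotations, and round-robin oversampling below are illustrative assumptions; `oversample_to_balance` is a hypothetical helper name, and the toy class counts merely mimic OASIS-style imbalance.

```python
import numpy as np

def augment(slice_2d):
    """Illustrative augmentation set (flips and 90-degree rotations):
    returns 8 label-preserving variants of one MRI slice."""
    out = []
    for k in range(4):
        r = np.rot90(slice_2d, k)
        out.append(r)
        out.append(np.fliplr(r))
    return out

def oversample_to_balance(images_by_class):
    """Counter class imbalance by topping up minority classes with
    augmented copies (drawn round-robin) to the majority-class count."""
    target = max(len(v) for v in images_by_class.values())
    balanced = {}
    for label, imgs in images_by_class.items():
        pool = [a for img in imgs for a in augment(img)]
        extra = [pool[i % len(pool)] for i in range(target - len(imgs))]
        balanced[label] = list(imgs) + extra
    return balanced

# Toy example mimicking the imbalance: 6 "No Dementia" vs 2 "Moderate" slices.
major = [np.ones((4, 4)) * i for i in range(6)]
minor = [np.ones((4, 4)) * i for i in range(2)]
balanced = oversample_to_balance({"NoDem": major, "Moderate": minor})
```

After balancing, each class contributes the same number of training samples, which is the precondition the abstract identifies for training the transfer learning model without class-imbalance bias.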