To remain competitive, engineers must deliver fast, low-cost, and dependable solutions. The advancement of machine intelligence and its application in almost every field has created a need to reduce the human role in image processing while saving time and labor. Lepidopterology is the branch of entomology dedicated to the scientific study of moths and the three butterfly superfamilies. Students of lepidopterology must generally capture butterflies with nets and dissect them to determine the insect’s family and morphology. This research aims to help science students recognize butterflies correctly without harming the insects during analysis. This paper discusses transfer-learning-based neural network models for identifying butterfly species. The dataset, collected from the Kaggle website, contains 10,035 images of 75 butterfly species. From the available data, 15 unusual species were selected, covering varied butterfly orientations, photography angles, butterfly sizes, occlusion, and background complexity. Analysis of the dataset revealed an imbalanced class distribution among the 15 selected classes, which leads to overfitting. The proposed system therefore performs data augmentation to mitigate data scarcity and reduce overfitting; the augmented dataset also improves model accuracy. This work applies transfer learning with several convolutional neural network architectures, namely VGG16, VGG19, MobileNet, Xception, ResNet50, and InceptionV3, to classify the butterfly species. All proposed models are evaluated using precision, recall, F-measure, and accuracy. The findings reveal that the InceptionV3 architecture achieves an accuracy of 94.66%, superior to all other architectures.
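As a rough illustration of the transfer-learning setup described above, the sketch below builds an InceptionV3-based classifier with a data-augmentation front end in Keras. The input size, augmentation choices, and 15-class head are assumptions for illustration, not the exact configuration reported in the paper.

```python
# Minimal transfer-learning sketch (TensorFlow/Keras), assuming 15 butterfly
# classes and ImageNet-pretrained InceptionV3 as a frozen feature extractor.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 15          # 15 selected species (assumed label count)
IMG_SIZE = (299, 299)     # InceptionV3's default input resolution

# Data augmentation to compensate for class imbalance and data scarcity.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False    # freeze ImageNet weights for transfer learning

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = layers.Rescaling(1.0 / 127.5, offset=-1.0)(x)  # scale pixels to [-1, 1]
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The same head can be swapped onto VGG16, VGG19, MobileNet, Xception, or ResNet50 by changing the `base` model and input resolution.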
Alzheimer’s disease (AD) is a neurodegenerative disease that affects a large number of people across the globe. Even though AD is one of the most commonly seen brain disorders, it is difficult to detect, and its detection requires a categorical representation of features to differentiate similar patterns. Research into complex problems such as AD detection frequently employs neural networks, and these approaches are often regarded as well understood, even sufficient, by researchers and scientists without formal training in artificial intelligence. It is therefore important to provide a detection method that is fully automated and user-friendly for non-AI experts. Such a method should find efficient values for a model’s design parameters promptly, simplifying the neural network design process and thereby democratizing artificial intelligence. Furthermore, multimodal medical image fusion offers richer modal features and a superior ability to represent information: a fused image integrates relevant and complementary information from multiple input images, facilitating more accurate diagnosis and better treatment. This study presents MultiAz-Net, a novel optimized ensemble deep neural network that incorporates heterogeneous information from PET and MRI images to diagnose Alzheimer’s disease. Based on features extracted from the fused data, we propose an automated procedure for predicting the onset of AD at an early stage. The proposed architecture involves three steps: image fusion, feature extraction, and classification. Additionally, the Multi-Objective Grasshopper Optimization Algorithm (MOGOA) is applied to optimize the layers of the MultiAz-Net; the desired objective functions are imposed, and corresponding values of the design parameters are searched. The proposed deep ensemble model has been tested on four Alzheimer’s disease classification tasks (three binary and one multi-class) using the publicly available Alzheimer neuroimaging dataset. The proposed method achieved (92.3 ± 5.45)% accuracy on the multi-class classification task, significantly better than other reported network models.
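The multi-objective architecture search can be pictured as encoding each candidate network as a small parameter vector and scoring it on competing objectives. The sketch below is a simplified illustration of that idea, assuming hypothetical design parameters (number of convolution blocks, filters, dense units) and a two-objective score (validation error vs. model size); it is not the paper’s MOGOA implementation.

```python
# Simplified sketch of encoding a candidate CNN for a multi-objective search
# (e.g., MOGOA). Parameter names, ranges, and input shape are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_candidate(n_blocks, n_filters, n_dense, num_classes=4):
    """Build a small CNN from a design-parameter vector.

    n_blocks  : number of Conv-Pool blocks (assumed search dimension)
    n_filters : filters in the first block, doubled in each later block
    n_dense   : units in the fully connected layer
    """
    model = models.Sequential([layers.Input(shape=(128, 128, 1))])
    for b in range(n_blocks):
        model.add(layers.Conv2D(n_filters * (2 ** b), 3, padding="same",
                                activation="relu"))
        model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    model.add(layers.Dense(n_dense, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

def objectives(params, val_error):
    """Two competing objectives a multi-objective optimizer trades off:
    validation error (obtained by training the candidate) and complexity."""
    model = build_candidate(*params)
    return val_error, model.count_params()

# Example: score one candidate. In practice, val_error would come from
# training the candidate on the fused PET/MRI data (placeholder value here).
print(objectives((3, 16, 64), val_error=0.12))
```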
The World Health Organization (WHO) predicted that 10 million people would have died of cancer by 2020. According to recent studies, liver cancer is among the most prevalent cancers worldwide. Hepatocellular carcinoma (HCC) is the most common form of primary liver cancer and occurs most frequently in patients with chronic liver conditions such as cirrhosis. It is therefore important to predict liver cancer outcomes more explicitly using machine learning. This study examines survival prediction on an HCC dataset using three strategies. First, missing values are imputed using mean, mode, and k-Nearest Neighbor (k-NN) estimation. We then compare different feature selection approaches using wrapper and embedded methods. The embedded method employs the Least Absolute Shrinkage and Selection Operator (LASSO) and ridge regression in conjunction with Logistic Regression (LR). In the wrapper method, gradient boosting and random forests eliminate features recursively. The classification algorithms used for prediction are k-NN, Random Forest (RF), and Logistic Regression. The experimental results indicate that Recursive Feature Elimination with Gradient Boosting (RFE-GB) produces the best results, with a 96.66% accuracy rate and a 95.66% F1-score.
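A minimal sketch of this kind of imputation, feature-selection, and classification pipeline in scikit-learn is shown below. The synthetic feature matrix, missing-value rate, and number of selected features are placeholders, not the paper’s actual HCC data or settings.

```python
# Sketch of a k-NN imputation -> RFE with gradient boosting -> classifier
# pipeline using scikit-learn. X, y, and n_features_to_select are assumptions.
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.feature_selection import RFE
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(165, 49))                 # stand-in for the HCC features
X[rng.random(X.shape) < 0.1] = np.nan          # simulate missing values
y = rng.integers(0, 2, size=165)               # survival labels (0/1)

pipeline = Pipeline([
    ("impute", KNNImputer(n_neighbors=5)),                                    # k-NN imputation
    ("select", RFE(GradientBoostingClassifier(), n_features_to_select=20)),   # RFE-GB wrapper
    ("clf", LogisticRegression(max_iter=1000)),                               # final classifier
])

scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy:", scores.mean())
```

Swapping `LogisticRegression` for `RandomForestClassifier` or `KNeighborsClassifier`, or the RFE estimator for a random forest, reproduces the other combinations compared in the study.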
Alzheimer’s disease (AD) is a neurological disease that affects numerous people. The condition causes brain atrophy, which leads to memory loss, cognitive impairment, and death. In its early stages, Alzheimer’s disease is difficult to predict, yet treatment provided at an early stage of AD is more effective and causes less damage than treatment at a later stage. Although AD is a common brain condition, it is hard to recognize, and its classification requires a discriminative feature representation to separate similar brain patterns. Multimodal neuroimaging, which combines information from multiple medical images, can classify and diagnose AD more accurately and comprehensively. Magnetic resonance imaging (MRI) has been used for decades to assist physicians in diagnosing Alzheimer’s disease. Deep models have detected AD with high accuracy in computer-assisted imaging and diagnosis by minimizing the need for hand-crafted feature extraction from MRI images. This study proposes a multimodal image fusion method with a modular set of image preprocessing procedures that automatically fuses MRI neuroimages and converts Alzheimer’s Disease Neuroimaging Initiative (ADNI) data into the BIDS standard, in order to classify MRI data of Alzheimer’s subjects against normal controls. Furthermore, a 3D convolutional neural network is used to learn generic features by capturing AD biomarkers in the fused images, yielding richer multimodal feature information. Finally, a conventional CNN with three classifiers (Softmax, SVM, and RF) predicts and classifies the extracted multimodal Alzheimer’s brain traits against those of normal, healthy brains. The findings reveal that the proposed method can efficiently predict AD progression by combining high-dimensional MRI characteristics from different public sources, with accuracies ranging from 88.7% to 99%, outperforming baseline models applied to MRI-derived voxel features.
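The combination of a 3D CNN feature extractor with Softmax, SVM, and RF classifiers can be sketched as below. The volume resolution, layer sizes, and toy data are assumptions for illustration; the fused ADNI volumes and the paper’s exact network are not reproduced here.

```python
# Rough sketch: a 3D CNN learns volumetric features from fused neuroimages;
# the extracted features are then handed to SVM and Random Forest classifiers.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

VOL_SHAPE = (64, 64, 64, 1)   # fused MRI volume (assumed resolution)

# 3D CNN with a softmax head (one of the three classifiers).
cnn = models.Sequential([
    layers.Input(shape=VOL_SHAPE),
    layers.Conv3D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling3D(),
    layers.Conv3D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling3D(),
    layers.GlobalAveragePooling3D(),
    layers.Dense(64, activation="relu", name="features"),
    layers.Dense(2, activation="softmax"),      # AD vs. normal control
])

# Reuse the penultimate layer as a feature extractor for the SVM and RF.
extractor = models.Model(cnn.input, cnn.get_layer("features").output)

# Toy volumes standing in for fused scans (placeholder data).
X = np.random.rand(8, *VOL_SHAPE).astype("float32")
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

feats = extractor.predict(X, verbose=0)
svm = SVC().fit(feats, y)
rf = RandomForestClassifier().fit(feats, y)
print(svm.predict(feats[:2]), rf.predict(feats[:2]))
```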
Many of the corn diseases that affect agriculture go unnoticed by farmers. Each day, more crops are lost to disease because there is no effective treatment or reliable way to identify the illness. Common rust, northern leaf blight, and grey leaf spot are the most prevalent corn diseases. The presence of a disease cannot be accurately detected simply by looking at the plant, which leads to improper pesticide use and, in turn, harms people by causing chronic diseases. Maintaining food security therefore depends on accurate and automatic disease detection. Utilizing digital technologies could save time and stop crop degradation before it takes place; hence, applying modern digital technologies to automatically identify disease in damaged corn fields will be highly advantageous to farmers. Many researchers have recently turned to deep learning, which has aided in creating accurate and autonomous image classification systems. Deep learning techniques, and adaptations of them for detecting corn diseases, can greatly assist contemporary agriculture. To detect plant leaf diseases, we employ image acquisition, preprocessing, and classification. Preprocessing includes procedures such as reading images, resizing them, and data augmentation. The proposed approach is based on EfficientNet and improves accuracy on the corn leaf disease dataset by tuning its parameters. Comparative tests with DenseNet and ResNet on the test dataset confirm the accuracy and robustness of the approach. According to the experimental results, the method achieves a recognition accuracy of 98.85%, significantly higher than other state-of-the-art techniques.
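An EfficientNet-based fine-tuning pipeline of the kind described can be sketched as follows. The directory path, image size, class list, and number of unfrozen layers are assumptions for illustration, not the paper’s exact configuration.

```python
# Illustrative fine-tuning sketch with EfficientNetB0 on a corn leaf disease
# image folder; "corn_leaf_dataset/train" is a placeholder path.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
NUM_CLASSES = 4   # e.g., common rust, northern leaf blight, grey leaf spot, healthy

# Preprocessing: read and resize images from the (hypothetical) directory.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "corn_leaf_dataset/train", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = True
for layer in base.layers[:-20]:   # fine-tune only the top layers
    layer.trainable = False

model = models.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    layers.RandomFlip("horizontal"),          # data augmentation
    layers.RandomRotation(0.1),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

DenseNet and ResNet baselines can be compared by substituting `DenseNet121` or `ResNet50` for the `base` model with the same head.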