Plant species recognition from visual data has long been a challenging task for Artificial Intelligence (AI) researchers, owing in part to the enormous volume of data arising from the vast number of plant species. Many parts of a plant can serve as feature sources for an AI-based model, but leaf-derived features are considered the most significant for this task, primarily because leaves are more easily accessible than other parts such as flowers or stems. With this in mind, we propose a plant species recognition model based on morphological features extracted from leaf images using a support vector machine (SVM) with adaptive boosting (AdaBoost). The proposed framework comprises pre-processing, feature extraction, and classification into one of the species. Morphological features such as centroid, major axis length, minor axis length, solidity, perimeter, and orientation are extracted from digital images of various categories of leaves. In addition, transfer learning, as suggested by some previous studies, is used in the feature-extraction process. Classifiers such as kNN, decision trees, and multilayer perceptrons (with and without AdaBoost) are evaluated on the open-source FLAVIA dataset to verify the robustness of our approach against other classifier frameworks. Our study also demonstrates the advantage of 10-fold cross-validation over other dataset-partitioning strategies, achieving a precision of 95.85%.
Like the COVID-19 pandemic, a smallpox outbreak struck in the last century, with an estimated 500 million deaths reported alongside enormous economic loss. Unlike smallpox, however, COVID-19 has recorded lower infection and mortality rates, owing to advances in medical care and diagnostics. Data analytics, machine learning, and automation techniques can support early diagnosis and treatment of the many reported patients. This paper proposes a robust and efficient methodology for the early detection of COVID-19 from chest X-ray (CXR) scans using enhanced deep learning techniques. Our study suggests that combining Prediction and Deconvolutional Modules with the SSD architecture improves the performance of models trained on this task. We used a publicly available CXR image dataset and implemented the detection model with task-specific pre-processing and a near 80:20 train/test split. The model achieved a competitive specificity of 0.9474 and a sensitivity of 0.9597, which can support better decision-making in identifying and treating the infection.
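The reported specificity and sensitivity follow directly from confusion-matrix counts. A minimal sketch, using hypothetical counts chosen only so the ratios land near the reported values (the paper's actual counts are not given here):

```python
def specificity_sensitivity(tp, fp, tn, fn):
    """Specificity = TN/(TN+FP); sensitivity (recall) = TP/(TP+FN)."""
    specificity = tn / (tn + fp)
    sensitivity = tp / (tp + fn)
    return specificity, sensitivity

# Hypothetical counts from an 80:20 test split of a CXR dataset.
spec, sens = specificity_sensitivity(tp=119, fp=6, tn=108, fn=5)
print(f"specificity={spec:.4f}, sensitivity={sens:.4f}")
```

High specificity limits false COVID-19 alarms on healthy scans, while high sensitivity limits missed infections; both matter for triage decisions.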
The novel coronavirus is presently responsible for 547,782 deaths worldwide. First observed in China in late 2019, its rising case count has seriously disrupted the economic, structural, and educational growth of almost every nation. With the advancement of data analytics and machine learning towards enhanced diagnostic tools for the infection, the growth rate in affected patients has reduced considerably, making it important for AI researchers and medical-radiology experts to invest further effort in this direction. To that end, we present a controlled study analysing various candidate detection models for COVID-19 detection from radiology images such as chest X-rays. We provide a rigorous comparison of VGG16, VGG19, Residual Network, and DarkNet as foundational (backbone) networks paired with the Single Shot MultiBox Detector (SSD) for predictions. Together with task-specific pre-processing techniques such as CLAHE, this study shows the potential of the methodology relative to existing techniques. The highest precision and recall, 93.01 and 94.98 respectively, were achieved with DenseNet201 + SSD512.
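CLAHE builds on plain histogram equalization by applying it per tile with a clip limit. As a self-contained illustration of the underlying remapping, here is global histogram equalization in NumPy (a simpler relative of CLAHE, not CLAHE itself; the toy image is a made-up stand-in for a low-contrast X-ray):

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Global histogram equalization: map grey levels through the
    normalized cumulative histogram. CLAHE refines this by equalizing
    local tiles under a clip limit, but the core remapping is the same."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                      # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]

# A low-contrast toy "X-ray": grey values squeezed into a narrow band.
rng = np.random.default_rng(0)
img = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
out = hist_equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```

The equalized image stretches the narrow input band across the full grey range, which is what makes faint lung structures easier for a detector to pick up.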
Predicting the risk of a borrower defaulting using AI tools is an emerging task in growing demand, given the revolutionary potential of AI. Attributes such as income, acquired property, educational status, and many other socioeconomic factors can be used to train a model to predict the likelihood of non-repayment of a loan. Most techniques and algorithms previously applied to this problem pay no attention to predictive uncertainty on out-of-distribution (OOD) data points, which contributes to overfitting and leads to relatively lower accuracy on such points. For credit risk classification in particular, this is a serious concern, given the structure of the available datasets and the trends they follow. Focusing on this issue, we propose a robust methodology that uses a recent and efficient family of nonlinear neural-network activation functions, which mimics the properties induced by the widely used Matérn family of kernels in Gaussian process (GP) models. We tested classification performance on three openly available datasets after preprocessing, achieving a high mean classification accuracy of 87.4% and a low mean negative log predictive density loss of 0.405.
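The two reported metrics, mean classification accuracy and mean negative log predictive density (NLPD), can both be computed from a model's predicted class probabilities. A minimal NumPy sketch with made-up predictions for five hypothetical loan applicants:

```python
import numpy as np

def accuracy_and_nlpd(probs, y_true, eps=1e-12):
    """probs: (n, classes) predicted probabilities; y_true: (n,) labels.
    NLPD = -mean log p(true class); lower values mean better-calibrated
    uncertainty, the focus of OOD-aware credit models."""
    preds = probs.argmax(axis=1)
    acc = (preds == y_true).mean()
    nlpd = -np.mean(np.log(probs[np.arange(len(y_true)), y_true] + eps))
    return acc, nlpd

# Hypothetical predicted probabilities (columns: repay, default).
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3],
                  [0.4, 0.6], [0.8, 0.2]])
y_true = np.array([0, 1, 0, 1, 1])
acc, nlpd = accuracy_and_nlpd(probs, y_true)
print(f"accuracy={acc:.3f}, NLPD={nlpd:.3f}")
```

Unlike accuracy, NLPD penalizes confident wrong predictions heavily, which is why it is the natural companion metric for a model whose selling point is calibrated uncertainty.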