2019
DOI: 10.22266/ijies2019.1031.28
Feature Selection Effects on Gradient Descent Logistic Regression for Medical Data Classification

Abstract: In recent years, a number of researchers have concentrated on medical data analytics, because machine intelligence in medical diagnosis is a new trend for numerous medical applications. Medical datasets are generally massive, so traditional classifiers suffer from overfitting and under-fitting on the training set. In this paper, a Gradient Descent Logistic Regression (GDLR) classification method is proposed for medical data classification. The Pearson Correlation Coefficient (PCC) is used to calcul…
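The abstract pairs PCC-based feature selection with gradient-descent logistic regression. The sketch below illustrates that general combination under standard definitions (rank features by absolute Pearson correlation with the label, then fit logistic regression by plain batch gradient descent); the feature count, learning rate, and toy data are illustrative assumptions, not the paper's GDLR settings.

```python
# Sketch only: Pearson-correlation feature ranking + gradient-descent logistic
# regression. Thresholds and hyperparameters are illustrative, not from the paper.
import numpy as np

def pearson_select(X, y, k=5):
    """Rank features by |Pearson correlation| with the label and keep the top k."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()) + 1e-12
    )
    return np.argsort(-np.abs(corr))[:k]

def fit_gd_logistic(X, y, lr=0.1, epochs=500):
    """Plain batch gradient descent on the mean cross-entropy loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)          # gradient w.r.t. weights
        grad_b = np.mean(p - y)                  # gradient w.r.t. bias
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy usage with random data standing in for a medical dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)
idx = pearson_select(X, y, k=5)
w, b = fit_gd_logistic(X[:, idx], y)
```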

Cited by 7 publications (3 citation statements) · References 20 publications
“…In generating random data, CUDA programming is very fast; for example, generating one lakh (100,000) records took 0.05 seconds. In the second phase, the number of attributes, their types, and their labels are fixed. In the third phase of the implementation, the attributes are represented in the form of a decision tree [21,22]. Here the decision tree is a binary tree, because only complete data sets are classified. In the fourth phase, the generated values are sorted and the split point for each numerical attribute is determined. In the fifth phase of the implementation, the decision tree-based classification rules used in the GPU programming are modelled.…”
Section: Implementation and Results
confidence: 99%
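The fourth phase described above (sort the values, then determine a split point per numerical attribute) is the standard decision-tree split search. A minimal CPU-side sketch, assuming binary 0/1 labels and Gini impurity as the split criterion (the cited work's actual criterion and CUDA kernels are not reproduced here):

```python
# Sketch: choose the split threshold for one numeric attribute of a binary tree.
import numpy as np

def gini(part):
    """Gini impurity of a binary-labelled partition (labels are 0/1 floats)."""
    p = part.mean()
    return 2.0 * p * (1.0 - p)

def best_split(values, labels):
    """Sort the attribute values, then return the threshold that minimises
    the weighted Gini impurity of the two resulting partitions."""
    order = np.argsort(values)
    v, y = values[order], labels[order]
    best_thr, best_g = None, float("inf")
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:
            continue                            # identical values cannot be separated
        thr = (v[i] + v[i - 1]) / 2.0           # midpoint candidate split
        g = (i * gini(y[:i]) + (len(y) - i) * gini(y[i:])) / len(y)
        if g < best_g:
            best_thr, best_g = thr, g
    return best_thr, best_g
```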
“…Let us use a simple example to make it easier to understand. Suppose we have a model that has to predict whether a review given by a person about COVID-19 (completely fictitious) is positive or negative [30]. Based on the review, we calculate the polarity of that review using the coefficients b0 = −0.05 and b1 = 0.05.…”
Section: Proposed Methods
confidence: 99%
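A worked version of that toy calculation, assuming a single numeric review feature x (the choice of feature is our illustration, not the source's) and the stated coefficients b0 = −0.05 and b1 = 0.05:

```python
# Logistic-regression polarity of a review from one feature x (illustrative).
import math

def polarity(x, b0=-0.05, b1=0.05):
    """Sigmoid of the linear score b0 + b1 * x."""
    z = b0 + b1 * x
    return 1.0 / (1.0 + math.exp(-z))

print(polarity(10))    # ~0.61 -> probability > 0.5, classified positive
print(polarity(-10))   # ~0.37 -> probability < 0.5, classified negative
```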
“…The third and important approach is the usage of GAN models, an unsupervised technique that can generate new images by learning the patterns that exist within the input images. A GAN consists of two components: a generator that creates new images by training the model, and a discriminator for classification [12,13]. In GANs, the generator takes as input a fixed-length vector, known as the "noise vector", to produce the salt-and-pepper noise images, because most of the plants have a smoked layer above them.…”
Section: Introduction
confidence: 99%
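For reference, a minimal sketch of that generator/discriminator split, written here in PyTorch with flattened images; the 100-dimensional noise vector and the layer sizes are illustrative assumptions, not the cited architecture:

```python
# Sketch: GAN components -- generator maps a fixed-length noise vector to an
# image, discriminator maps an image to a "real" probability.
import torch
import torch.nn as nn

NOISE_DIM, IMG_DIM = 100, 64 * 64            # noise length and flattened image size (assumed)

generator = nn.Sequential(                   # noise -> synthetic image
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),      # pixel values in [-1, 1]
)

discriminator = nn.Sequential(               # image -> probability of being real
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

z = torch.randn(8, NOISE_DIM)                # batch of 8 fixed-length noise vectors
fake_images = generator(z)                   # generator creates new images
real_prob = discriminator(fake_images)       # discriminator scores them
```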