2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE)
DOI: 10.1109/icacite53722.2022.9823464
A Systematic Method for Breast Cancer Classification using RFE Feature Selection

Cited by 72 publications (6 citation statements). References 7 publications.
“…In this manner, the new subtrees will update the earlier residuals to lessen the cost function's error. Random forest is a set of trees trained using samples obtained from a random resampling of the training set [29,30]. Bootstrap samples are those produced by randomly resampling the training set.…”
Section: Classifiers (citation type: mentioning)
confidence: 99%
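The quoted passage describes a random forest as an ensemble of trees, each fitted on a bootstrap resample of the training set. A minimal sketch of that idea in Python, assuming scikit-learn and its bundled Wisconsin breast-cancer dataset; the cited paper's actual data split and hyperparameters are not given here, so the values below are illustrative only.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Illustrative data: the Wisconsin breast-cancer dataset shipped with scikit-learn.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    # bootstrap=True makes each of the 100 trees train on a random resample
    # (with replacement) of the training set, i.e. a bootstrap sample.
    rf = RandomForestClassifier(n_estimators=100, bootstrap=True, random_state=42)
    rf.fit(X_train, y_train)
    print("Held-out accuracy:", rf.score(X_test, y_test))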
“…Performance was measured using the following evaluation metrics. These metrics are calculated from the correctly classified positive instances (true positives), the correctly classified negative instances (true negatives), the negative instances incorrectly classified as positive (false positives), and the positive instances incorrectly classified as negative (false negatives) [30,31].…”
Section: Evaluation Metrics (citation type: mentioning)
confidence: 99%
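The statement above defines the evaluation metrics in terms of the four confusion-matrix counts. A short illustrative computation of accuracy, precision, recall, and F1 from those counts; the labels below are hypothetical, not results from the cited work.

    from sklearn.metrics import confusion_matrix

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical ground truth (1 = malignant)
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical classifier output

    # For binary labels, ravel() returns the counts in the order tn, fp, fn, tp.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    accuracy  = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)  # also called sensitivity
    f1        = 2 * precision * recall / (precision + recall)
    print(accuracy, precision, recall, f1)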
“…ANOVA feature selection [32,33,34] is a technique used in machine learning for selecting the most essential characteristics from the dataset. The ANOVA technique involves calculating the F-value for each feature, representing the degree to which the target variable’s variance can explain that feature’s variance.…”
Section: Proposed Deep Feature Selection Based Approach (citation type: mentioning)
confidence: 99%
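As a rough sketch of the ANOVA F-value selection described in this excerpt, scikit-learn's f_classif can score each feature and SelectKBest can keep the highest-scoring ones; the dataset and the choice of k = 10 are assumptions for illustration, not details taken from the cited paper.

    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, f_classif

    X, y = load_breast_cancer(return_X_y=True)

    # f_classif computes the ANOVA F-value of each feature against the class labels;
    # SelectKBest then keeps the k features with the largest F-values.
    selector = SelectKBest(score_func=f_classif, k=10)
    X_selected = selector.fit_transform(X, y)

    print("F-values (first 5 features):", selector.scores_[:5])
    print("Reduced feature matrix shape:", X_selected.shape)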