2016
DOI: 10.1007/978-3-319-46909-6_13
FRFE: Fast Recursive Feature Elimination for Credit Scoring

Cited by 6 publications (5 citation statements)
References 16 publications
“…Typically, credit scoring databases are often large and characterised by redundant and irrelevant features [56]. Financial data and credit data in particular usually contain irrelevant and redundant features [57]. The redundancy and the deficiency in data can reduce the classification accuracy and lead to incorrect decision [58], [39].…”
Section: Existing Work
confidence: 99%
“…decision trees, SVMs, linear models). Although initially the algorithm was coupled with SVM for selecting features in high-dimensional small sample gene-expression data, it has since been used in a variety of settings with various base models [12,16,21,27].…”
Section: Related Work
confidence: 99%
“…However, due to possibly high computational costs, the authors of RFE suggest that in some scenarios it may be more efficient to remove several features at a time, at the expense of possible classification performance degradation [11]. With the purpose of speeding up RFE, Ha and Nguyen recently put forward the Fast Recursive Feature Elimination (Fast RFE) algorithm [12]. The authors propose to use Parallel Random Forest as the wrapped classifier, and, thus, save model training time.…”
Section: Related Work
confidence: 99%
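The Fast RFE idea quoted above — wrapping a parallel random forest and dropping several features per iteration to save training time — can be sketched with off-the-shelf tools. This is a minimal illustration using scikit-learn's `RFE` on synthetic data, not the authors' FRFE implementation; the dataset, estimator settings, and `step` value are assumptions chosen for the example.

```python
# Sketch of recursive feature elimination with a parallel random forest,
# in the spirit of Fast RFE. Illustrative only; not the FRFE code.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# Synthetic stand-in for a credit-scoring dataset with redundant features.
X, y = make_classification(n_samples=500, n_features=30,
                           n_informative=8, random_state=0)

# n_jobs=-1 parallelises tree building (the "Parallel Random Forest"
# choice); step=5 removes several features per iteration, the
# speed-vs-accuracy trade-off the RFE authors mention.
estimator = RandomForestClassifier(n_estimators=50, n_jobs=-1,
                                   random_state=0)
selector = RFE(estimator, n_features_to_select=10, step=5).fit(X, y)

print(selector.support_.sum())  # number of retained features
```

With `step=5`, each round fits one forest and eliminates the five lowest-importance features, so the number of model fits drops roughly fivefold compared with one-at-a-time elimination.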