2019 16th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2019
DOI: 10.1109/ecti-con47248.2019.8955366
Empirical Analysis using Feature Selection and Bootstrap Data for Small Sample Size Problems

Cited by 9 publications (6 citation statements) | References 9 publications
“…First, XGBoost is equipped to handle data with relatively fewer samples [61]. Second, with only six tuning parameters, the XGBoost model can be more easily tuned than any moderately complex neural network model [62]. Third, boosting has proven itself as a formidable predictive modeling technique and often learns complex relationships similarly to neural network architectures that use gradient descent for optimization.…”
Section: Results and Discussion
confidence: 99%
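To make the "only six tuning parameters" claim above concrete, here is a minimal sketch of fitting an XGBoost classifier on a small sample while searching only a compact set of knobs. The dataset, parameter values, and choice of exactly which six parameters to expose are illustrative assumptions, not taken from the cited paper.

```python
# Sketch: XGBoost on a small sample, tuning only a handful of parameters.
# Data and parameter values are hypothetical.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))                    # small sample: 60 rows, 8 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy binary target

# A compact tuning surface: roughly the six knobs most often searched.
model = XGBClassifier(
    n_estimators=200,
    max_depth=3,
    learning_rate=0.1,
    subsample=0.8,
    colsample_bytree=0.8,
    min_child_weight=2,
)

scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.3f}")
```

A grid or random search over these six parameters stays tractable even on very small datasets, which is the ease-of-tuning contrast with neural networks the citing authors draw.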
“…There are several advantages to using XGBoost over neural network architectures. First, XGBoost is equipped to handle data with relatively fewer samples [61]. Second, with only six tuning parameters, the XGBoost model can be more easily tuned than any moderately complex neural network model [62].…”
Section: Results
confidence: 99%
“…al., (2018)] is an effective imputing technique for numerical and categorical attribute values. Other methods, such as Expectation Maximization Imputation (EMI) [Zhao and Duangsoithong, (2020)], calculate the imputation values by exploring the mean and covariance matrix of the dataset. The problem with this method is that it relies on highly correlated data amongst the attributes.…”
Section: Previous Research Study
confidence: 99%
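The mean/covariance idea behind EMI can be illustrated with a single conditional-mean step under a multivariate-normal assumption; a full EM implementation would alternate this step with re-estimation of the mean and covariance until convergence. The data, the initial estimates, and the helper function below are illustrative assumptions, not the cited method's actual code.

```python
# Sketch: one conditional-mean imputation step using the dataset's
# mean vector and covariance matrix (the quantities EMI exploits).
import numpy as np

def conditional_mean_impute(X, mu, Sigma):
    """Fill NaNs in each row with E[x_missing | x_observed]."""
    X = X.copy()
    for row in X:
        m = np.isnan(row)              # mask of missing entries
        if not m.any() or m.all():
            continue
        o = ~m                         # mask of observed entries
        # E[x_m | x_o] = mu_m + Sigma_mo Sigma_oo^{-1} (x_o - mu_o)
        Sigma_oo = Sigma[np.ix_(o, o)]
        Sigma_mo = Sigma[np.ix_(m, o)]
        row[m] = mu[m] + Sigma_mo @ np.linalg.solve(Sigma_oo, row[o] - mu[o])
    return X

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0, 0],
                            [[1, .8, .3], [.8, 1, .2], [.3, .2, 1]],
                            size=50)
X[rng.random(X.shape) < 0.1] = np.nan       # knock out ~10% of entries

mu = np.nanmean(X, axis=0)
Sigma = np.cov(np.where(np.isnan(X), mu, X), rowvar=False)  # crude initial estimate
X_imputed = conditional_mean_impute(X, mu, Sigma)
print(np.isnan(X_imputed).sum(), "missing values remain")
```

The formula also makes the quoted drawback visible: the imputed values are driven entirely by the cross-attribute covariance terms, so the method leans on highly correlated attributes.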
“…Thus, gradient boosted trees result in the identification of variables that have the overall greatest influence on predictive accuracy. The method is amenable to analysis with relatively smaller sample sizes (Zhao & Duangsoithong, 2019), is not impacted by multicollinearity (Ding, Wang, Ma, & Li, 2016), and has been used in the past for prediction tasks involving social media popularity (Li et al, 2017) and tweets (Ong, Rahmanto, Suhartono, Nugroho, & Andangsari, 2017). In this analysis gradient boosted decision trees were implemented via extreme gradient boosting (commonly known as XGBoost; Chen & Guestrin, 2016).…”
Section: The Effect Of Press Releases On Fear
confidence: 99%
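The variable-identification use of XGBoost described above is typically read off the trained booster's gain-based importance scores. The following sketch shows that workflow on toy data; the dataset, feature names, and parameter values are illustrative assumptions.

```python
# Sketch: ranking variables by their overall influence on predictions
# with gradient boosted trees (XGBoost; Chen & Guestrin, 2016).
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
# Toy outcome driven mainly by features f0 and f2.
y = 2.0 * X[:, 0] + X[:, 2] + rng.normal(scale=0.5, size=200)

dtrain = xgb.DMatrix(X, label=y,
                     feature_names=["f0", "f1", "f2", "f3", "f4"])
booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                    num_boost_round=100)

# 'gain' = average improvement in the objective when a feature is used
# to split, i.e. its overall influence on predictive accuracy.
importances = booster.get_score(importance_type="gain")
for name, gain in sorted(importances.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {gain:.2f}")
```

On this toy setup f0 and f2 dominate the gain ranking, mirroring how such scores are used to identify the most influential predictors.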