Encyclopedia of Big Data Technologies 2018
DOI: 10.1007/978-3-319-63962-8_268-2
Julia

Cited by 2 publications (2 citation statements); references 0 publications. Citing publications appeared in 2021 and 2022.
“…However, sufficient parameters are critical to capture nuances and prevent under-fitting issues. With the use of regularization, poor generalization can often be avoided (Voulgaris, 2016). The regularization of models uses parameters differently by merging them while simultaneously fine-tuning them.…”
Section: Forecasting Models
confidence: 99%
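The cited point, that L2 regularization can curb poor generalization by shrinking parameters, can be sketched with a closed-form ridge regression in Python. The data, the penalty value `lam`, and the helper name `ridge_fit` below are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Illustrative data: y depends only on the first feature;
# the second feature is redundant noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=20)

w_ols = ridge_fit(X, y, lam=0.0)    # ordinary least squares (no penalty)
w_ridge = ridge_fit(X, y, lam=5.0)  # L2 penalty shrinks weights toward zero

# The penalized fit has strictly smaller coefficient magnitudes,
# which is the mechanism behind the improved generalization.
assert np.linalg.norm(w_ridge) < np.linalg.norm(w_ols)
```

Setting `lam=0.0` recovers unregularized least squares, so the two fits isolate the effect of the penalty on the learned weights.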
“…XGBoost is based on a gradient algorithm that uses scalable, regularized tree boosting to provide flexible and optimized regression and classification trees (Chen and Guestrin, 2016; Hastie et al., 2001a, b). Through regularization, poor generalization within models' predictive outcomes can be mitigated (Voulgaris, 2016), while inputting a sizable amount of independent factors can minimize under-fitting issues. Additionally, incorporating broader scale matrixes into XGBoost models allows for capturing nuances within independent factors and contribution attributes (Ampountolas and Legg, 2021).…”
Section: Gradient Boosting
confidence: 99%
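The gradient-boosting idea described above can be sketched in Python with depth-1 regression trees (stumps): each round fits a stump to the current residuals (the negative gradient of squared loss) and adds a shrunken version to the ensemble. This is a minimal sketch, not XGBoost itself; the `lam` shrinkage on leaf values is only a simplified analogue of XGBoost's L2 leaf-weight penalty, and all names and data below are illustrative assumptions:

```python
import numpy as np

def fit_stump(X, r):
    """Fit a depth-1 regression tree (stump) to residuals r by squared error."""
    best_err, best_stump = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:   # candidate thresholds
            left = X[:, j] <= t
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if err < best_err:
                best_err, best_stump = err, (j, t, lv, rv)
    return best_stump

def stump_predict(stump, X):
    j, t, lv, rv = stump
    return np.where(X[:, j] <= t, lv, rv)

def gboost_fit(X, y, n_rounds=100, lr=0.1, lam=1.0):
    """Gradient boosting for squared loss. The negative gradient of squared
    loss is the residual y - pred; lam shrinks leaf values (a crude stand-in
    for XGBoost's L2 leaf-weight regularization)."""
    base = y.mean()
    pred = np.full(len(y), base)
    stumps = []
    for _ in range(n_rounds):
        j, t, lv, rv = fit_stump(X, y - pred)
        shrink = 1.0 / (1.0 + lam)          # L2 penalty on leaf values
        stump = (j, t, lv * shrink, rv * shrink)
        stumps.append(stump)
        pred = pred + lr * stump_predict(stump, X)
    return (base, lr, stumps)

def gboost_predict(model, X):
    base, lr, stumps = model
    out = np.full(len(X), base)
    for s in stumps:
        out = out + lr * stump_predict(s, X)
    return out

# Illustrative data: a noisy step function of the first feature.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.where(X[:, 0] > 0, 2.0, -2.0) + rng.normal(scale=0.2, size=200)

model = gboost_fit(X, y)
mse = np.mean((gboost_predict(model, X) - y) ** 2)
```

The learning rate `lr` and the leaf-value shrinkage together play the regularizing role discussed in the citation: each tree corrects only a fraction of the remaining error, which trades training-set fit for better generalization.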