Hyperparameter Tuning for Machine and Deep Learning with R (2023)
DOI: 10.1007/978-981-19-5170-1_9

Case Study II: Tuning of Gradient Boosting (xgboost)

Abstract: This case study gives a hands-on description of the Hyperparameter Tuning (HPT) methods discussed in this book. The Extreme Gradient Boosting (XGBoost) method and its implementation were chosen because XGBoost is one of the most powerful methods for many Machine Learning (ML) tasks, especially the analysis of standard tabular data. This case study follows the same HPT pipeline as the first and third studies: after the data set is provided and pre-processed, the experimental design is set up. Next, the HPT exper…
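As a rough illustration of that pipeline, the following R sketch tunes two XGBoost hyperparameters by cross-validated grid search. It is a minimal sketch under stated assumptions: the simulated data, the search grid, and all parameter values are hypothetical placeholders, and plain grid search with xgb.cv merely stands in for the HPT methods the chapter itself applies.

library(xgboost)

# Hypothetical sketch of the HPT pipeline (data set, grid, and all settings
# are illustrative placeholders, not the case study's configuration).
set.seed(1)
n <- 500
x <- matrix(rnorm(n * 10), nrow = n)                     # simulated tabular features
y <- as.numeric(x[, 1] + 0.5 * x[, 2]^2 + rnorm(n) > 0)  # simulated binary target
dtrain <- xgb.DMatrix(data = x, label = y)

# Experimental design: a small grid over two XGBoost hyperparameters
grid <- expand.grid(max_depth = c(2, 4, 6), min_child_weight = c(1, 5, 10))

# HPT experiment: 5-fold cross-validation for every grid point
grid$cv_logloss <- apply(grid, 1, function(p) {
  cv <- xgb.cv(
    params = list(objective = "binary:logistic", eta = 0.1,
                  max_depth = p[["max_depth"]],
                  min_child_weight = p[["min_child_weight"]]),
    data = dtrain, nrounds = 100, nfold = 5,
    early_stopping_rounds = 10, verbose = 0
  )
  min(cv$evaluation_log$test_logloss_mean)
})

# Inspect the configurations ranked by cross-validated log loss
grid[order(grid$cv_logloss), ]

Printing the sorted grid shows which hyperparameter combination achieved the lowest cross-validated log loss on the simulated data; in the book, the search itself is driven by the HPT methods discussed in earlier chapters rather than by this plain grid.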

Cited by 4 publications (1 citation statement); references 0 publications.
“…The depth of tree controls the depth of the individual trees. Smaller depth trees such as decision stumps are computationally efficient; however, higher depth trees allow the algorithm to capture unique interactions but also increase the risk of overfitting (Bartz-Beielstein et al , 2023). Minimum number of observations control complexity of each tree.…”
Section: Methods (citation type: mentioning)
confidence: 99%
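The quoted point can be illustrated with a small R experiment that fits boosted decision stumps (max_depth = 1) against deeper trees and compares the final train and test log loss. This is a hedged sketch on simulated data: the data set, the depths, and every other setting are assumptions chosen for illustration, not the configuration used by the cited work or by the book chapter.

library(xgboost)

# Illustrative only: simulated data with a pure interaction effect; all
# settings below are assumptions, not taken from the cited publication.
set.seed(42)
n <- 1000
x <- matrix(rnorm(n * 20), nrow = n)
y <- as.numeric(x[, 1] * x[, 2] > 0)        # target driven by an interaction
idx    <- sample(n, floor(0.7 * n))
dtrain <- xgb.DMatrix(x[idx, ],  label = y[idx])
dtest  <- xgb.DMatrix(x[-idx, ], label = y[-idx])

fit_depth <- function(depth) {
  bst <- xgb.train(
    params = list(objective = "binary:logistic", eta = 0.1,
                  max_depth = depth, min_child_weight = 1),
    data = dtrain, nrounds = 200,
    watchlist = list(train = dtrain, test = dtest), verbose = 0
  )
  cbind(max_depth = depth, tail(bst$evaluation_log, 1))  # final train/test logloss
}

# Stumps (depth 1) cannot represent the interaction, while deeper trees can
# capture it but tend to overfit (train loss far below test loss).
do.call(rbind, lapply(c(1, 4, 10), fit_depth))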