2018 IEEE International Parallel and Distributed Processing Symposium (IPDPS)
DOI: 10.1109/ipdps.2018.00033
Efficient Gradient Boosted Decision Tree Training on GPUs

Cited by 39 publications (19 citation statements) | References 21 publications
“…Regarding previous work on accelerating GBT training, methods using multiple CPU threads, distributed systems, and GPUs [6], [7], [8], [9], [10] have been reported, and some of their implementations are widely used. However, the achieved speedup remains small compared to that of Random Forest, another popular decision tree ensemble technique.…”
Section: Related Work (mentioning)
confidence: 99%
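
For concreteness, here is a minimal sketch of what one of these GPU-accelerated implementations looks like from the user's side, using XGBoost's Python API. The data are synthetic, and the tree_method="gpu_hist" spelling follows the XGBoost 1.x releases; 2.x expresses the same thing as device="cuda" with tree_method="hist".

    import numpy as np
    import xgboost as xgb

    # Synthetic binary-classification data standing in for a real training set.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((10_000, 50)).astype(np.float32)
    y = (X[:, 0] + 0.1 * rng.standard_normal(10_000) > 0).astype(np.int32)

    dtrain = xgb.DMatrix(X, label=y)
    params = {
        "objective": "binary:logistic",
        "max_depth": 6,
        "eta": 0.1,
        "tree_method": "gpu_hist",  # histogram-based split finding on the GPU (1.x spelling)
    }
    booster = xgb.train(params, dtrain, num_boost_round=100)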
“…To increase the accuracy of our decision tree model, we employed gradient boosting, an ensemble learning technique that turns our weak learner (a decision tree) into a strong learner by generating trees sequentially, so that each new tree improves on the previous ones. This is achieved by adding a new additive model that optimizes the loss function of the preceding decision tree ensemble [38][39][40]. GridSearchCV was employed to obtain the optimized hyperparameters in this paper.…”
Section: Gradient Boosted Decision Tree (mentioning)
confidence: 99%
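
A minimal sketch of the procedure this statement describes, using scikit-learn's GradientBoostingClassifier with GridSearchCV; the parameter grid below is illustrative, not the one from the cited paper.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import GridSearchCV

    # Synthetic data standing in for the paper's dataset.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    grid = GridSearchCV(
        GradientBoostingClassifier(random_state=0),
        param_grid={
            "n_estimators": [100, 300],    # number of sequentially added trees
            "learning_rate": [0.05, 0.1],  # shrinkage applied to each new tree
            "max_depth": [2, 3],           # depth of the individual weak learners
        },
        cv=5,
        scoring="accuracy",
    )
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)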
“…In later work, [Mitchell et al 2018] use a multi-GPU XGBoost approach in which the training data are partitioned and processed across several GPUs. [Wen et al 2018] parallelize XGBoost's trees at the node level, the attribute level, and the split level. Finally, [Browne et al 2018] reorganize a trained decision forest in memory using a layout designed to speed up classification.…”
Section: Related Work (unclassified)
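
To illustrate what attribute-level parallelism means, here is a toy sketch (not the authors' CUDA kernels): candidate splits at a tree node can be scored independently per feature, so the per-feature scans run concurrently. Python threads stand in for GPU thread blocks, and the gain formula is the standard gradient/hessian one used in GBT split finding.

    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    def best_split_for_feature(X, g, h, j, reg_lambda=1.0):
        """Scan feature j in sorted order, scoring every split point by the
        usual GBT gain: GL^2/(HL+lam) + GR^2/(HR+lam) - G^2/(H+lam)."""
        order = np.argsort(X[:, j])
        g_sorted, h_sorted = g[order], h[order]
        GL, HL = np.cumsum(g_sorted)[:-1], np.cumsum(h_sorted)[:-1]
        G, H = g.sum(), h.sum()
        gain = (GL**2 / (HL + reg_lambda)
                + (G - GL)**2 / (H - HL + reg_lambda)
                - G**2 / (H + reg_lambda))
        k = int(np.argmax(gain))
        return gain[k], j, X[order[k], j]

    def best_split(X, g, h):
        # Attribute-level parallelism: one concurrent scan per feature.
        with ThreadPoolExecutor() as pool:
            results = pool.map(lambda j: best_split_for_feature(X, g, h, j),
                               range(X.shape[1]))
        return max(results)  # (gain, feature index, threshold)

    # Example: gradients/hessians for squared loss at prediction 0: g = -y, h = 1.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 8))
    y = X[:, 3] + 0.1 * rng.standard_normal(200)
    print(best_split(X, g=-y, h=np.ones(200)))  # should pick feature 3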
“…Since textual document data usually contain many terms whose values are zero, the dataset can be represented sparsely. A sparse matrix wins on storage (only the nonzero values are kept); its disadvantage is access cost [Wen et al 2018]. Several sparse-data storage schemes exist, each with its own characteristics.…”
Section: Parallelization of the Algorithm (unclassified)
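
One common concrete scheme, shown here for illustration (the statement above does not commit to a specific format), is CSR (Compressed Sparse Row) as implemented in scipy.sparse, which makes the storage/access trade-off visible.

    import numpy as np
    from scipy.sparse import csr_matrix

    dense = np.array([[0, 0, 3],
                      [1, 0, 0],
                      [0, 2, 0]], dtype=np.float64)
    sparse = csr_matrix(dense)

    print(sparse.data)     # [3. 1. 2.]  the nonzero values only
    print(sparse.indices)  # [2 0 1]     their column indices
    print(sparse.indptr)   # [0 1 2 3]   row boundaries into data/indices

    # Storage win: 3 stored values instead of 9. Access cost: reading entry
    # (i, j) requires scanning row i's index slice, not one array lookup.
    row, col = 0, 2
    start, end = sparse.indptr[row], sparse.indptr[row + 1]
    hit = sparse.indices[start:end] == col
    print(sparse.data[start:end][hit])  # [3.]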