2015
DOI: 10.1002/cpe.3660

Parallel construction of classification trees on a GPU

Abstract: Algorithms for constructing tree‐based classifiers aim to build an optimal set of rules implicitly described by a dataset of training samples. As the number of samples and/or attributes in the dataset increases, the required construction time becomes the limiting factor for interactive, or even functional, use. The problem is amplified when tree derivation is part of an iterative optimization method, such as boosting. Attempts to parallelize the construction of classification trees have therefore… [abstract truncated]

Cited by 21 publications (6 citation statements). References 27 publications.
“…This can be used in different tasks such as classification, regression, and other analyses, as they improve forecasting models and can also make combinations between trees (Rokach, 2016). There are several studies on how to build a decision tree, such as Luštrek et al (2016), Levatić et al (2017), and Strnad and Nerat (2016), among others. A situation can be modeled in order to direct more efficient decision-making, where its predictive performance is slightly better than the standard algorithms (González, Herrera and Garcia, 2015).…”
Section: Machine Learning and Methods Employed (mentioning)
confidence: 99%
“…This implementation is done using a one-dimensional grid where each block can produce a dataset to train each tree. Similarly, D. Strnad and A. Nerat [6] presented a GPU-based parallel design for a classification tree. The proposed algorithm has three parallelism levels: 1) node level: to find the best split point for multiple attributes on a node concurrently, 2) attribute level: to find the best attribute for multiple nodes concurrently, and 3) split level to evaluate multiple possible split points.…”
Section: Literature Review (mentioning)
confidence: 99%
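The citation above summarizes the paper's three levels of parallelism: evaluating candidate split points, scanning attributes, and processing multiple tree nodes at once. A minimal CPU sketch of the underlying sequential computation is shown below; the comments mark which loop each parallelism level would map onto. All function names here are illustrative, not taken from the paper.

```python
# Best-split search for one node of a classification tree (sequential sketch).
# The paper maps the loops below onto GPU threads at three levels; this code
# only illustrates what is being parallelized, not how the GPU kernels work.

from collections import Counter


def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())


def best_split(X, y):
    """Exhaustive best (attribute, threshold, impurity) by weighted Gini.

    Split level:     the inner loop over candidate thresholds.
    Attribute level: the outer loop over attributes.
    Node level:      calling best_split for many open tree nodes at once.
    """
    n = len(y)
    best = (None, None, float("inf"))
    for a in range(len(X[0])):                   # attribute level
        for t in sorted({row[a] for row in X}):  # split level
            left = [y[i] for i in range(n) if X[i][a] <= t]
            right = [y[i] for i in range(n) if X[i][a] > t]
            if not left or not right:
                continue
            imp = (len(left) * gini(left) + len(right) * gini(right)) / n
            if imp < best[2]:
                best = (a, t, imp)
    return best
```

For example, `best_split([[1, 5], [2, 4], [3, 1], [4, 0]], [0, 0, 1, 1])` finds that splitting attribute 0 at threshold 2 separates the classes perfectly (weighted impurity 0.0).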
“…The authors use OpenCL because it works across different architectures. [Strnad and Nerat 2016] explore tree construction using node-level, attribute-level, and split-level parallelism. The approach of [Zhang et al 2017] uses attribute histograms to find the best split on the GPU; the difference there is that the trees are regression trees with boosting, a type of iterative ensemble (each new model depends on the previous one).…”
Section: Related Work (unclassified)
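The histogram technique mentioned in the last statement replaces the scan over all distinct attribute values with a scan over a fixed number of bins, which is what makes it attractive on a GPU. The sketch below illustrates the idea for one attribute with binary labels; the function name, bin count, and equal-width binning are illustrative assumptions, not details from the cited papers.

```python
# Histogram-based split search for a single attribute (binary labels).
# Instead of testing every distinct value, samples are bucketed into a small
# number of equal-width bins and only bin boundaries are tested as thresholds.

def histogram_best_threshold(values, labels, n_bins=8):
    """Best threshold and weighted Gini impurity using per-bin histograms.

    Binning reduces the candidate splits from O(n) distinct values to
    n_bins, so the boundary scan is cheap and GPU-friendly.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant attribute

    # Accumulate per-bin totals and positive-label counts.
    pos = [0] * n_bins
    tot = [0] * n_bins
    for v, y in zip(values, labels):
        b = min(int((v - lo) / width), n_bins - 1)
        tot[b] += 1
        pos[b] += y

    def g(p, m):
        """Gini impurity for m samples of which p are positive."""
        q = p / m
        return 2 * q * (1 - q)

    n, n_pos = len(values), sum(labels)
    best_t, best_imp = None, float("inf")
    left_n = left_pos = 0
    for b in range(n_bins - 1):  # scan bin boundaries, not raw samples
        left_n += tot[b]
        left_pos += pos[b]
        right_n = n - left_n
        if left_n == 0 or right_n == 0:
            continue
        right_pos = n_pos - left_pos
        imp = (left_n * g(left_pos, left_n)
               + right_n * g(right_pos, right_n)) / n
        if imp < best_imp:
            best_t, best_imp = lo + (b + 1) * width, imp
    return best_t, best_imp
```

On a perfectly separable attribute such as `values = [0..7]` with labels `[0, 0, 0, 0, 1, 1, 1, 1]`, the scan finds a boundary near 3.5 with impurity 0.0, matching what the exhaustive search would return.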