2021
DOI: 10.1016/j.jpdc.2021.04.008
Efficient traversal of decision tree ensembles with FPGAs

Cited by 7 publications (4 citation statements); References 43 publications
“…An example of another line of research is an implementation of a state-of-the-art tree traversal algorithm [30]. They present a system-on-chip (SoC)-based field-programmable gate array (FPGA) implementation of the QuickScorer algorithm [31], an efficient tree traversal algorithm designed for large binary tree ensembles.…”
Section: Related Work
confidence: 99%
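The QuickScorer algorithm mentioned above scores a tree without pointer-chasing: each internal node carries a precomputed bitvector whose zero bits mark the leaves of its left subtree, the bitvectors of all "false" nodes are AND-ed together, and the exit leaf is the leftmost surviving bit. A minimal single-tree sketch, assuming a 4-leaf tree; the node layout, mask encoding, and `quickscorer_tree` name are illustrative, not the paper's code:

```python
def quickscorer_tree(x, nodes, num_leaves, leaf_values):
    """Score one tree with QuickScorer-style bitvector traversal.

    nodes: list of (feature, threshold, mask) for internal nodes, where
    mask has 0-bits for the leaves of that node's left subtree
    (leaf 0 is the most significant bit).
    """
    bv = (1 << num_leaves) - 1              # every leaf starts as a candidate
    for feat, thr, mask in nodes:
        if x[feat] > thr:                   # node test fails ("false node")
            bv &= mask                      # drop its left-subtree leaves
    exit_leaf = num_leaves - bv.bit_length()  # leftmost surviving leaf
    return leaf_values[exit_leaf]

# Tiny complete tree: root tests f0 <= 5; its children test f1 <= 3 and f1 <= 7.
nodes = [(0, 5.0, 0b0011), (1, 3.0, 0b0111), (1, 7.0, 0b1101)]
leaves = [0.1, 0.2, 0.3, 0.4]
score = quickscorer_tree({0: 6.0, 1: 2.0}, nodes, 4, leaves)  # exits at leaf 2
```

Because the per-node work is a comparison plus a bitwise AND, the inner loop maps naturally onto wide, fixed-latency FPGA datapaths, which is what makes the algorithm attractive for the SoC implementation cited here.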
“…Lucchese et al (2015b) apply these ideas to QuickScorer with different flavors of blocking (Dato et al, 2016), and make further improvements through vectorization over multiple documents (Lucchese et al, 2016b) and multi-core and GPU parallelism (Lettich et al, 2019). More recently, Gil-Costa et al (2022) and Molina et al (2021) propose a novel design of the QuickScorer algorithm and the application of binning or quantization techniques to tree ensembles to fully leverage novel, energy-efficient field-programmable gate arrays (FPGAs). Ye et al (2018) take the data structure in QuickScorer and make it more compact in their algorithm, RapidScorer.…”
Section: Feature-Major Traversal
confidence: 99%
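The binning/quantization idea mentioned above replaces floating-point node thresholds with small integer bin indices, so the hardware compares (for example) 8-bit integers instead of 32-bit floats. A minimal sketch of the general technique, using quantile-based cut points; the function names and quantile strategy are illustrative and are not taken from the cited papers:

```python
import numpy as np

def build_bins(feature_values, num_bins=256):
    """Quantile boundaries for one feature (num_bins - 1 cut points)."""
    probs = np.linspace(0.0, 1.0, num_bins + 1)[1:-1]
    return np.quantile(feature_values, probs)

def quantize(values, boundaries):
    """Map raw feature values (or tree thresholds) to integer bin indices."""
    return np.searchsorted(boundaries, values)

# 4 bins over uniform training values 0..99:
bnds = build_bins(np.arange(100), num_bins=4)
thr_bins = quantize(np.array([10.0, 60.0]), bnds)  # thresholds become small ints
```

After quantization, every node test `x[f] <= t` becomes an integer comparison of bin indices. Values that land in the same bin as a threshold are no longer distinguished from it, so the transformation is approximate; the trade-off is much cheaper comparators and narrower memories on the FPGA.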
“…As a result, the first input instance in the dataset will be the one with the largest number of non-zero attributes. The algorithm then calculates current_sparse_perc at lines 9-11, prior to entering the inner loop (lines 12-18). In this loop, the algorithm starts with a selection of the first input instance from the dataset.…”
Section: Attribute-Sparse Training of SVM
confidence: 99%
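The preprocessing step described in this snippet, ordering instances so the densest (most non-zero attributes) comes first and tracking a sparsity percentage per instance, can be sketched as follows. This is a guess at the described behavior, not the cited paper's code; the helper names are hypothetical:

```python
def order_by_density(X):
    # Densest instance (most non-zero attributes) first, so the first
    # element matches the ordering described in the passage.
    return sorted(X, key=lambda row: sum(v != 0 for v in row), reverse=True)

def sparse_percentage(row):
    # Fraction of zero-valued attributes in a single instance.
    return sum(v == 0 for v in row) / len(row)

X = [[0, 1, 0], [1, 2, 3], [0, 0, 1]]
ordered = order_by_density(X)   # [1, 2, 3] comes first (no zeros)
```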
“…A significant effort has been made by the scientific community in this direction. Regarding DTs, a number of FPGA DT implementations were presented in the literature [15]-[20]. These proposed architectures dealt with the acceleration of axis-parallel, oblique, or nonlinear DTs and ensembles of dense DTs.…”
Section: Introduction
confidence: 99%