2018 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2018.00017
Realization of Random Forest for Real-Time Evaluation through Tree Framing

Cited by 24 publications (21 citation statements); references 17 publications.
“…An additional advantage of the random forest algorithm is its suitability for implementation in the prototypes of sensor nodes. It should be noted that several approaches to implementation of the random forest classifier for embedded devices are available in the literature [53, 54, 55]. In this study the random forest algorithm was used for activity recognition as well as for data classification to decide which data have to be transmitted.…”
Section: Methods
confidence: 99%
“…DTs with small depths are very fast classifiers, and it is convenient to obtain their C++ code for the use in embedded systems. When the classifier model is trained and ready for operation, the tree structure and the split values of the tree from the trained model are extracted, and then converted to deployable C++ code, as proposed by Buschjäger et al. [21] as a standard if-else tree. The tree is then appended as the next step in the pipeline after feature extraction.…”
Section: Materials and Methods
confidence: 99%
“…Here, we evaluate the execution time of LBW BNNs and compare it to regular ones using commonly available CPUs for the BNN operations. As experiment platforms for the execution time measurements of machine learning models we use the same setup as the work in [36]. Furthermore, we generate C++ code from PyTorch models with the same framework as the study in [37].…”
Section: LBW Execution of BNNs
confidence: 99%