2020
DOI: 10.3390/electronics9010099

Weighted Random Forests to Improve Arrhythmia Classification

Abstract: Constructing an ensemble model is a process of combining many diverse base predictive learners. This raises the questions of how to weight each model and how to tune the parameters of the weighting process. The most straightforward approach is simply to average the base models. However, numerous studies have shown that a weighted ensemble can provide superior prediction results to a simple average of models. The main goals of this article are to propose a new weighting algorithm applicable for each tree in the Ra…
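The contrast the abstract draws between a simple average and a weighted ensemble can be sketched as follows. This is a minimal illustration with made-up predictions and hand-picked weights, not the weighting algorithm proposed in the paper:

```python
import numpy as np

# Hypothetical predictions from three base learners for five test points.
base_preds = np.array([
    [0.2, 0.8, 0.5, 0.1, 0.9],
    [0.3, 0.7, 0.4, 0.2, 0.8],
    [0.9, 0.2, 0.6, 0.7, 0.1],  # a weaker, noisier learner
])

# Simple average: every learner counts equally.
simple_avg = base_preds.mean(axis=0)

# Weighted average: the weights (chosen by hand here purely for
# illustration) down-weight the weaker learner; they sum to 1.
weights = np.array([0.45, 0.45, 0.10])
weighted_avg = weights @ base_preds

print(simple_avg)
print(weighted_avg)
```

If the weights track each learner's accuracy, the weighted combination suppresses the noisy learner's influence, which is the intuition behind weighted ensembles outperforming a plain average.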

Cited by 29 publications (33 citation statements)
References 61 publications
“…The Random Forest algorithm introduced by Breiman [ 24 ] works by constructing many decision trees and averaging the predictions of the individual trees, utilizing a training sample (training dataset) of n observations: target values y_i (each an empirical realization of the response variable Y) and predictor variables X = (X_1, …, X_p), where each feature X_j can take a value from its own set of possible values (each observation x_i likewise being an empirical realization of X). The main objective of each decision tree is to find a model f(X) for predicting the values of Y from new values of X [ 7 ]. In theory, the solution is simply a recursive partition of the predictor space into disjoint rectangular subspaces (eventually reaching a terminal node called a leaf) in such a way that the predicted value of Y minimizes the total impurity of its child nodes (it is usually assumed that each parent node has two children, i.e., binary trees are considered).…”
Section: Weighted Quantile Regression Forests
confidence: 99%
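The construction described in the quoted passage — bootstrap sampling, per-tree recursive partitioning, and averaging of the individual trees — can be sketched in a minimal pure-NumPy form. For brevity each "tree" here is a depth-1 stump on a single feature; the data, tree count, and noise level are arbitrary choices for illustration, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(x, y):
    """Depth-1 regression tree on one feature: choose the threshold
    minimizing the total sum of squares of the two child nodes."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_t, best_sse = xs[0], np.inf
    for i in range(1, len(ys)):
        if xs[i] == xs[i - 1]:
            continue  # no threshold between identical feature values
        l, r = ys[:i], ys[i:]
        sse = ((l - l.mean()) ** 2).sum() + ((r - r.mean()) ** 2).sum()
        if sse < best_sse:
            best_t, best_sse = (xs[i - 1] + xs[i]) / 2, sse
    return best_t, y[x <= best_t].mean(), y[x > best_t].mean()

def fit_forest(x, y, n_trees=25):
    """Grow each tree on a bootstrap sample of the training data."""
    n = len(y)
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)  # sample n rows with replacement
        forest.append(fit_stump(x[idx], y[idx]))
    return forest

def predict(forest, x_new):
    # The forest prediction is the average of the individual trees.
    preds = [np.where(x_new <= t, left, right) for t, left, right in forest]
    return np.mean(preds, axis=0)

# Noisy step function: a single split per tree recovers the jump.
x = np.linspace(0.0, 10.0, 60)
y = np.where(x < 5.0, 1.0, 3.0) + rng.normal(0.0, 0.2, size=60)
forest = fit_forest(x, y)
print(predict(forest, np.array([1.0, 9.0])))  # roughly [1.0, 3.0]
```

A full Random Forest additionally restricts each split to a random subset of features and grows deep trees, but the bootstrap-then-average structure is the same.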
“…In theory, the solution is simply a recursive partition of the predictor space into disjoint rectangular subspaces (eventually reaching a terminal node called a leaf) in such a way that the predicted value of Y minimizes the total impurity of its child nodes (it is usually assumed that each parent node has two children, i.e., binary trees are considered). One of the first and most widely used decision tree algorithms is the classification and regression tree (CART) [ 58 ], which employs a measure of node impurity based on the distribution of the observed y values in the node, splitting a node so as to minimize the total impurity of its two child nodes, defined by the total sum of squares [ 7 ]: Σ_{i∈L} (y_i − ȳ_L)² + Σ_{i∈R} (y_i − ȳ_R)², where ȳ denotes the average value of the vector y over all observations belonging to a particular node. The process is applied recursively to the data in each child node.…”
Section: Weighted Quantile Regression Forests
confidence: 99%
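The CART sum-of-squares split described in the quoted passage amounts to scanning candidate thresholds and keeping the one whose two child nodes have the smallest total residual sum of squares. A small sketch with made-up one-feature data:

```python
import numpy as np

def best_split_sse(x, y):
    """Scan candidate thresholds on one feature and return the split
    minimizing the total sum of squares over the two child nodes
    (the CART impurity criterion for regression)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_t, best_sse = None, np.inf
    for i in range(1, len(ys)):
        if xs[i] == xs[i - 1]:
            continue  # no threshold between identical feature values
        left, right = ys[:i], ys[i:]
        sse = (((left - left.mean()) ** 2).sum()
               + ((right - right.mean()) ** 2).sum())
        if sse < best_sse:
            best_t, best_sse = (xs[i - 1] + xs[i]) / 2, sse
    return best_t, best_sse

# Two well-separated clusters: the best threshold falls between them.
x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([1.1, 0.9, 1.0, 5.0, 5.2, 4.8])
print(best_split_sse(x, y))  # threshold 6.5, small residual SSE
```

CART then applies the same scan recursively to the data in each child node, exactly as the passage describes.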