2015
DOI: 10.1166/jmihi.2015.1423

Combining Bootstrapping Samples, Random Subspaces and Random Forests to Build Classifiers

Cited by 27 publications (14 citation statements). References: 0 publications.
“…A random decision forest resembles bagging (bootstrap aggregation) with a decision tree (CART) base model: it builds k different decision trees, each trained on a random subset S of the training samples [ 31 ], and grows full Iterative Dichotomiser 3 (ID3) [ 32 ] trees with no pruning.…”
Section: Methods (mentioning)
confidence: 99%
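A minimal sketch of the procedure that excerpt describes: k unpruned decision trees, each fit on a bootstrap sample of the training data, combined by majority vote. The dataset, the value of k, and all variable names are illustrative assumptions, not details from the cited paper.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # placeholder dataset (assumption)
rng = np.random.default_rng(0)
k = 10  # number of trees (assumed value)

forest = []
for _ in range(k):
    # Random subset S of training samples, drawn with replacement (bootstrap)
    idx = rng.integers(0, len(X), size=len(X))
    # max_depth=None grows each tree to full depth, i.e. no pruning
    tree = DecisionTreeClassifier(max_depth=None, random_state=0)
    forest.append(tree.fit(X[idx], y[idx]))

# Majority vote across the k trees
votes = np.stack([t.predict(X) for t in forest])
pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble training accuracy:", (pred == y).mean())
```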
“…A random subspace-based method was applied to obtain 1,000 descriptor subsets of 200 potential independent variables each. In the random subspace approach, the molecular descriptors are randomly sampled, and each model is trained on one subset of the feature space (Yu et al., 2012; El Habib Daho and Chikh, 2015); as a result, individual models do not over-focus on features that display high explanatory power in the training set.…”
Section: Methods (mentioning)
confidence: 99%
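A hedged sketch of that random subspace step: draw fixed-size feature subsets without replacement and train one model per subset. The subset size (200) follows the quoted passage; the descriptor matrix, labels, base model, and reduced subset count are assumptions for a quick demo.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_descriptors = 1000                         # assumed descriptor count
X = rng.normal(size=(500, n_descriptors))    # placeholder descriptor matrix
y = rng.integers(0, 2, size=500)             # placeholder binary labels

n_subsets, subset_size = 50, 200  # study used 1,000 subsets; 50 keeps the demo fast
models = []
for _ in range(n_subsets):
    # Randomly sample a feature subspace without replacement
    cols = rng.choice(n_descriptors, size=subset_size, replace=False)
    model = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
    models.append((cols, model))  # remember which columns each model saw
```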
“…Classifier ensembles are known to provide better generalization and accuracy than single-model classifiers (El Habib Daho and Chikh, 2015; Carbonneau et al., 2016; Min, 2016). Here, we have used two retrospective virtual screening campaigns to assess the performance of individual classifiers and classifier ensembles.…”
Section: Methods (mentioning)
confidence: 99%
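An illustrative comparison of the generalization claim in that excerpt, not a reproduction of the cited virtual screening experiments: cross-validated accuracy of a single decision tree versus a bagged ensemble of trees on an assumed benchmark dataset.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset (assumption)

single = DecisionTreeClassifier(random_state=0)
ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                             random_state=0)

# The ensemble typically scores higher under cross-validation
print("single tree:", cross_val_score(single, X, y, cv=5).mean())
print("ensemble   :", cross_val_score(ensemble, X, y, cv=5).mean())
```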
“…Unlike the attribute selection measure, the choice of pruning method is a more crucial factor that directly affects the performance of tree-based classifiers [43]. Previous studies have shown that the generalization error converges as the number of trees increases, even without pruning [44]. This is because each tree can grow to its maximum depth using a different combination of features each time it is given new training data.…”
Section: Classification Algorithm (mentioning)
confidence: 99%
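A small sketch of that convergence claim under assumed data and tree counts: the test error of a random forest of fully grown, unpruned trees levels off as trees are added, which is why explicit pruning can be skipped.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset (assumption)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for n in (1, 5, 25, 125):  # assumed ensemble sizes
    # max_depth=None: every tree grows to maximum depth, no pruning
    rf = RandomForestClassifier(n_estimators=n, max_depth=None,
                                random_state=0).fit(X_tr, y_tr)
    print(f"{n:4d} trees -> test error {1 - rf.score(X_te, y_te):.3f}")
```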