2014
DOI: 10.1016/j.cageo.2014.09.007
Regression trees for modeling geochemical data—An application to Late Jurassic carbonates (Ammonitico Rosso)

Cited by 20 publications (3 citation statements)
References 50 publications
“…In addition to constructing each tree using a different bootstrap sample of the data, RFs change how the classification or regression trees are constructed. In standard trees, each node is split using the best split among all variables while in an RF, each node is split using the best among a subset of predictors randomly chosen at that node [53]. This somewhat counterintuitive strategy turns out to perform very well compared with many other classifiers, including discriminant analysis, SVMs and NNs, and is robust against overfitting [4] [10].…”
Section: Random Forest
confidence: 99%
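The node-splitting strategy described in the statement above can be sketched in plain Python. This is an illustrative sketch only: the function names and the squared-error split criterion are our assumptions, not taken from the cited papers.

```python
import random

def best_split(X, y, feature_indices):
    """Exhaustive split search: best (feature, threshold) pair by squared error.

    A standard regression tree calls this with every feature index."""
    best, best_err = None, float("inf")
    for j in feature_indices:
        for t in sorted({row[j] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[j] <= t]
            right = [y[i] for i, row in enumerate(X) if row[j] > t]
            if not left or not right:          # degenerate split, skip
                continue
            mean_l, mean_r = sum(left) / len(left), sum(right) / len(right)
            err = (sum((v - mean_l) ** 2 for v in left)
                   + sum((v - mean_r) ** 2 for v in right))
            if err < best_err:
                best_err, best = err, (j, t)
    return best

def rf_node_split(X, y, m, rng=random):
    """Random-forest variant: search only m randomly chosen predictors."""
    subset = rng.sample(range(len(X[0])), m)
    return best_split(X, y, subset)
```

With `m` equal to the total number of features this reduces to the standard-tree search; smaller `m` decorrelates the trees, which is the point of the random-subset strategy.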
“…A main advantage of SVM classification is that it performs well on datasets that have many attributes, even when there are only a few cases that are available for the training process [7]. However, several disadvantages of SVM classification include limitations in speed and size during both training and testing phase of the algorithm and the selection of the kernel function parameters [53].…”
Section: Attributes of the Classification Algorithms
confidence: 99%
“…Support vector machine (SVM) classification methods do have some drawbacks, though: the algorithm is limited in speed and size during both the training and testing phases, and the kernel function parameters must be selected carefully [25]. Random forests are a newer and beneficial classification method that requires only two parameters to be set when building a predictive model: the number of decision trees that are grown (t) and the number of input features considered when each node of a decision tree is split (m) [26].…”
Section: Introduction
confidence: 99%
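A toy illustration of the two random-forest parameters mentioned above, t (number of trees) and m (features considered per split). This is a minimal sketch assuming depth-one trees (stumps) as base learners and a squared-error criterion; every name here is ours, not the cited paper's.

```python
import random

def fit_stump(X, y, feats):
    """Best depth-1 split (feature, threshold, left mean, right mean) among feats."""
    best, best_err = None, float("inf")
    for j in feats:
        for thr in {row[j] for row in X}:
            left = [y[i] for i, row in enumerate(X) if row[j] <= thr]
            right = [y[i] for i, row in enumerate(X) if row[j] > thr]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            err = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if err < best_err:
                best_err, best = err, (j, thr, ml, mr)
    return best

def fit_forest(X, y, n_trees, m, seed=0):
    """Random forest with its two tuning parameters: t = n_trees, m = feats per split."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]        # bootstrap sample
        stump = fit_stump([X[i] for i in idx], [y[i] for i in idx],
                          rng.sample(range(p), m))        # random feature subset
        if stump is not None:                             # skip degenerate samples
            forest.append(stump)
    return forest

def predict(forest, x):
    """Average the stump predictions (regression-style aggregation)."""
    preds = [ml if x[j] <= thr else mr for j, thr, ml, mr in forest]
    return sum(preds) / len(preds)
```

Only `n_trees` and `m` need tuning; everything else (bootstrap size, split criterion) follows the standard recipe, which is why the cited statement calls the method simple to configure.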