2014
DOI: 10.1016/j.ecolind.2013.06.017
Modeling Canada yew (Taxus canadensis Marsh.) distribution and abundance in the boreal forest of northeastern Ontario, Canada

Cited by 4 publications (2 citation statements)
References 25 publications
“…In total, 16 classification methods were employed across the 128 Canadian wetland classification studies. The RF [94][95][96], ML [97,98], Decision Tree (DT) [38,[99][100][101][102], SVM [46][47][48], Multiple Classifier System (MCS) [11,103], Iterative Self-Organizing Data Analysis Technique (ISODATA) [104,105], CNN [21,27,54], k-Nearest Neighbors (k-NN) [106,107], and Artificial Neural Network (ANN) [30,[108][109][110] were the most commonly used algorithms. The Linear Discriminant Analysis (LDA) [83,111,112], Fuzzy Rule-Based Classification Systems (FRBCSs) [11,19], Markov Random Fields (MRF)-based method [113,114], k-means, and classification methods based on polarization target decomposition [115,116] were used once or less than three times and, here, were categorized as the "Other" group.…”
Section: Table A1 (mentioning)
Confidence: 99%
“…The benefit of using regression trees is that the predictors can be both categorical and continuous, and the approach is non-parametric (i.e., does not assume a normal response) [39]. The regression tree was set to have a minimum bucket value of three and a minimum complexity parameter value of 0.001, in order to follow standard procedures to identify the ideal number of nodes in the tree [40]. The result of a regression tree is a dendrogram or "tree" which splits the data into smaller, more homogenous groups based on the importance of variables on the nodes; the most important variables are found on the top node and least important on lower nodes [39].…”
Section: Hierarchical Classification Models (mentioning)
Confidence: 99%
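The settings quoted above (minimum bucket of 3, complexity parameter of 0.001) match R's rpart conventions. As a minimal illustrative sketch, the same idea can be approximated in Python with scikit-learn's DecisionTreeRegressor, whose `min_samples_leaf` and `ccp_alpha` parameters play roles analogous to rpart's `minbucket` and `cp` (the pruning criteria are related but not identical). The data below are synthetic, purely to show that the most influential predictor surfaces at the top split.

```python
# Hedged sketch: a regression tree with a minimum leaf size of 3 and
# cost-complexity pruning, approximating the rpart settings described
# in the text (minbucket = 3, cp = 0.001). Synthetic data only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 2))  # two continuous predictors
# Response depends strongly on the first predictor, weakly on noise.
y = np.where(X[:, 0] > 5, 2.0, 0.5) + rng.normal(0, 0.1, 200)

tree = DecisionTreeRegressor(
    min_samples_leaf=3,  # analogous to rpart's minbucket = 3
    ccp_alpha=0.001,     # cost-complexity pruning, akin to rpart's cp
    random_state=0,
)
tree.fit(X, y)

# The most important variable occupies the top node of the dendrogram;
# feature_importances_ quantifies each predictor's contribution.
print(tree.tree_.feature[0])        # index of the root-node split variable
print(tree.feature_importances_)
```

Because the response was constructed from the first predictor, the root split lands on it and its importance dominates, mirroring the text's point that the most important variables appear on the top nodes.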