2021
DOI: 10.1016/j.rse.2021.112294
Completing the machine learning saga in fractional snow cover estimation from MODIS Terra reflectance data: Random forests versus support vector regression

Cited by 68 publications (33 citation statements) · References 89 publications
“…Compared to alternative methods such as artificial neural networks (like the GRNN model used here), SVR can yield comparable accuracy with a much smaller training sample size (Mountrakis et al., 2011). This is in line with the "support vector" concept, which relies on only a few data points to define the position of the decision surface (Huang and Zhao, 2018; Kuter, 2021; Mountrakis et al., 2011). In light of this research, the SVR model was able to accurately model δ¹⁵N-NO₃⁻ values in surface water from commonly measured hydro-chemical variables, and thereby serves as an indirect, rapid, and convenient tool for environmental stable isotope prediction.…”
Section: Discussion (supporting)
confidence: 64%
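The excerpt leans on the "support vector" property: only a subset of the training points ends up defining the fitted surface. A minimal sketch of that idea, assuming scikit-learn and synthetic 1-D data (nothing here comes from the cited papers):

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic 1-D regression problem (an illustrative stand-in for the
# hydro-chemical predictors discussed in the excerpt).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(80, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=80)

# Epsilon-SVR with an RBF kernel: training points that fall inside the
# epsilon tube contribute nothing to the solution.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)

# Only the support vectors define the regression surface -- typically a
# small fraction of the training samples.
print(f"{len(svr.support_)} of {len(X)} samples are support vectors")
```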
“…In the last step, a linear model is built in the newly derived feature space to minimize the errors [73]. The SVM algorithm based on the Radial Basis Function (RBF) kernel (the most popular choice in the literature) has two hyper-parameters: the penalty factor C, which sets the trade-off between the fitting error and the model "complexity", and the kernel width gamma [74]. The SVM hyper-parameters were tuned with a grid search method.…”
Section: Support Vector Machine (SVM) (mentioning)
confidence: 99%
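The excerpt names the two RBF-SVM hyper-parameters (C and gamma) and tunes them by grid search. A minimal sketch of that procedure, assuming scikit-learn; the grid values and synthetic data are illustrative, not taken from the cited paper:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Synthetic regression data standing in for the paper's features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -0.5, 0.2]) + rng.normal(scale=0.1, size=100)

# Grid over the two hyper-parameters named in the excerpt: C trades off
# fitting error against model complexity; gamma is the RBF kernel width.
param_grid = {"C": [0.1, 1.0, 10.0, 100.0], "gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5,
                      scoring="neg_root_mean_squared_error")
search.fit(X, y)
print("best hyper-parameters:", search.best_params_)
```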
“…(2) the multicollinearity problem between features can be effectively avoided, supporting high-dimensional features [114,115]; (3) hierarchical relationships can be characterized, showing good stability with regard to missing and unbalanced data [116]; (4) it is strongly robust to outliers in the variables, handling them through the bagging method [117]; (5) it can also provide the importance of the explanatory variables, identifying their contributions to the model and the main predictive factors [117,118]. Moreover, the RF algorithm also has significant advantages in terms of OOB error, which is generally considered a good estimate of the expected error on unseen data [119].…”
Section: Applicability Of Machine Learning Algorithms (mentioning)
confidence: 99%
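Two of the RF properties the excerpt lists, OOB error as a built-in generalization estimate and per-variable importance, are directly observable after a fit. A minimal sketch assuming scikit-learn, with synthetic data in place of the study's variables:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic data: only the first two of four features carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# oob_score=True scores each tree on the samples left out of its
# bootstrap draw -- the OOB error the excerpt calls a good estimate of
# the expected error on unseen data.
rf = RandomForestRegressor(n_estimators=300, oob_score=True,
                           random_state=0).fit(X, y)
print(f"OOB R^2: {rf.oob_score_:.3f}")

# Impurity-based importances rank the explanatory variables'
# contributions to the model.
print("feature importances:", rf.feature_importances_.round(3))
```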