2012
DOI: 10.1117/1.jrs.6.063557
Derivation of biophysical variables from Earth observation data: validation and statistical measures

Cited by 137 publications (125 citation statements)
References 67 publications
“…All measures indicate the degree of association between predicted and estimated values of the same parameter and thus give an indication of prediction efficiency. Richter et al [42] recommended the combined set of r², RMSE and NRMSE, amongst others, for comprehensively quantifying the performance of vegetation biophysical models. For a single SI model selected, an option has been foreseen in the GUI to generate a scatterplot of the retrievals as a function of the cal/val measurements (see Section 4).…”
Section: Calibration/Validation Assessment
confidence: 99%
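The recommended set of measures can be computed directly from paired observed/predicted values. A minimal sketch (function and variable names are illustrative, not taken from the paper):

```python
import math

def validation_measures(observed, predicted):
    """Compute r^2, RMSE, and NRMSE for paired observed/predicted values."""
    n = len(observed)
    mean_obs = sum(observed) / n
    mean_pred = sum(predicted) / n
    # Squared Pearson correlation coefficient (r^2)
    cov = sum((o - mean_obs) * (p - mean_pred) for o, p in zip(observed, predicted))
    var_obs = sum((o - mean_obs) ** 2 for o in observed)
    var_pred = sum((p - mean_pred) ** 2 for p in predicted)
    r2 = cov ** 2 / (var_obs * var_pred)
    # Root-mean-square error
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(observed, predicted)) / n)
    # NRMSE: RMSE normalized by the mean of the observations
    nrmse = rmse / mean_obs
    return r2, rmse, nrmse
```

Reporting all three together, as recommended, separates association (r²) from absolute error (RMSE) and relative error (NRMSE).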
“…The same procedure was repeated for MODIS data. Bootstrapping techniques were used to evaluate the robustness of the empirical models, a valid alternative to traditional leave-one-out methods for validating the predictive ability of regression models according to Richter et al (2012), and following Steyerberg et al (2001), who recommend two hundred simulations. The median value of each statistic was then used as an indication of its performance.…”
Section: Empirical Models Fitting
confidence: 99%
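A bootstrap validation along these lines (200 resamples, median of the per-resample statistic) can be sketched as follows. The simple linear model and the RMSE statistic are illustrative choices, not taken from the cited study:

```python
import random
import statistics

def bootstrap_median_rmse(x, y, n_boot=200, seed=42):
    """Fit a least-squares line on each bootstrap resample, evaluate its
    RMSE on the full data set, and return the median over all resamples."""
    rng = random.Random(seed)
    n = len(x)
    rmses = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        xs = [x[i] for i in idx]
        ys = [y[i] for i in idx]
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        denom = sum((xi - mx) ** 2 for xi in xs)
        if denom == 0:  # degenerate resample (all x identical); skip it
            continue
        slope = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / denom
        intercept = my - slope * mx
        # Evaluate the resampled model on the full data set
        sq_err = [(yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y)]
        rmses.append((sum(sq_err) / n) ** 0.5)
    return statistics.median(rmses)
```

Using the median rather than the mean of the 200 per-resample statistics reduces the influence of occasional ill-conditioned resamples.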
“…RMSE cannot compare the errors of variables with different units. To address this limitation and allow model performance to be compared across variables, RRMSE divides RMSE by the average of the observed values (V_obs; Richter et al, 2012):…”
Section: Empirical Models Fitting
confidence: 99%
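Under the usual definition RRMSE = RMSE / mean(V_obs), the resulting error is dimensionless, so rescaling a variable (e.g., a unit conversion) leaves it unchanged. A brief sketch with illustrative data:

```python
import math

def rrmse(observed, predicted):
    """Relative RMSE: RMSE divided by the mean of the observed values,
    yielding a unit-free error comparable across variables."""
    n = len(observed)
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(observed, predicted)) / n)
    return rmse / (sum(observed) / n)

# Rescaling both series by the same factor leaves RRMSE unchanged:
lai_obs, lai_pred = [1.0, 2.0, 4.0], [1.2, 1.9, 3.7]
scaled_obs = [v * 100 for v in lai_obs]   # same data in different units
scaled_pred = [v * 100 for v in lai_pred]
```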
“…While the methodologies reviewed in other papers (e.g., Bellocchi et al 2010; Richter et al 2012) represent the state of the art in structured, numerically based evaluation, it cannot be assumed that this analysis is all that is required for model outputs to be accepted, particularly when models are used with and for stakeholders. The numerical analysis may provide credibility within the technoscientific research community.…”
Section: Deliberative Processes for Comprehensive Model Evaluation
confidence: 99%
“…Several evaluation methods are available, but usually only a limited number of them are used in modeling projects (as documented, for instance, by Richter et al 2012 and Ritter and Muñoz-Carpena 2013), often due to time and resource constraints. This is also because different users of models (and beneficiaries of model outputs) may have different thresholds for confidence: some may derive their confidence simply from the model reports displayed, while others may require more in-depth evaluation before they are willing to believe the results.…”
Section: Concepts and Tools
confidence: 99%