2003
DOI: 10.1109/tse.2003.1245300

A Simulation Study of the Model Evaluation Criterion MMRE

Index Terms: Mean magnitude of relative error, simulation, regression analysis, prediction models, software engineering.

Cited by 390 publications (292 citation statements); references 40 publications.
“…Thus, this paper still uses the above criteria. In addition, it uses the absolute residual measures, because these, in particular the SD Ab.Res., have been shown to be better measures than MRE for model comparison [18].…”
Section: Prediction Accuracy Measures (mentioning, confidence: 99%)
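
To make the contrast concrete, here is a minimal Python sketch (not code from [18]; the effort values and variable names are invented for illustration) that computes MMRE and the standard deviation of the absolute residuals (SD Ab.Res.) for the same set of predictions:

```python
# A minimal sketch (not code from [18]; effort values and names are
# invented for illustration) contrasting an MRE-based measure (MMRE)
# with absolute-residual measures such as SD Ab.Res.
import statistics

actual = [100.0, 250.0, 40.0, 80.0]     # hypothetical actual efforts
predicted = [120.0, 200.0, 90.0, 70.0]  # hypothetical predictions

# Magnitude of relative error per project: |actual - predicted| / actual
mre = [abs(a - p) / a for a, p in zip(actual, predicted)]
mmre = statistics.mean(mre)

# Absolute residuals and their standard deviation (SD Ab.Res.)
abs_res = [abs(a - p) for a, p in zip(actual, predicted)]
sd_abs_res = statistics.stdev(abs_res)

print(f"MMRE       = {mmre:.3f}")        # 0.444
print(f"SD Ab.Res. = {sd_abs_res:.3f}")  # 20.616
```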
“…On the other hand, there is a concern about MRE, because it is biased [33] and not always reliable as a prediction accuracy measure [18]. However, MRE has been the de facto standard in the software effort prediction literature, and no alternative standard exists at present.…”
Section: Prediction Accuracy Measures (mentioning, confidence: 99%)
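
For reference, the standard definitions behind this discussion (restated here; $y_i$ denotes the actual effort and $\hat{y}_i$ the predicted effort for project $i$) are:

\[
\mathrm{MRE}_i = \frac{\lvert y_i - \hat{y}_i \rvert}{y_i},
\qquad
\mathrm{MMRE} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{MRE}_i .
\]

Because the denominator is the actual value, an underestimate contributes at most $\mathrm{MRE}_i = 1$ (reached at $\hat{y}_i = 0$), while an overestimate is unbounded; this asymmetry underlies the bias noted above.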
“…Table 3 summarizes the 5 error measures we used in our experiments. These 5 error measures were selected following the suggestion of a study by Foss et al. [26], in which the validity of numerous error measures commonly used in the software development effort estimation literature was systematically evaluated. These 5 measures are considerably more robust and valid than many others, such as the well-known MMRE and Pred(25).…”
Section: Evaluation Methodology (mentioning, confidence: 99%)
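
For concreteness, here is a minimal Python sketch of the conventional Pred(l) measure named above (the standard definition; the function name and example data are invented for illustration, not taken from [26]):

```python
# Minimal sketch of the conventional Pred(l) measure: the fraction of
# predictions whose MRE is at most l percent. Example data are
# illustrative assumptions, not taken from Foss et al. [26].
def pred(actual, predicted, level=25):
    """Return the fraction of predictions with MRE <= level/100."""
    mre = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return sum(1 for m in mre if m <= level / 100) / len(mre)

# Three of these four hypothetical predictions fall within 25% MRE.
print(pred([100.0, 250.0, 40.0, 80.0], [120.0, 200.0, 90.0, 70.0]))  # 0.75
```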
“…Software effort estimation studies are often criticized for inadequate use of evaluation metrics such as error measures. For example, the use of MMRE by itself is commonly seen in the literature [26], despite widespread criticism of MMRE as an inappropriate and biased measure. In this study, however, we maximized construct validity by using 5 robust error measures, whose robustness was verified by a statistical method [26].…”
Section: Threats To Validity (mentioning, confidence: 99%)
“…They use the NASA KC1 data set to evaluate their approach, but unfortunately their only performance measure is the Magnitude of Relative Error. In the literature, this metric is considered asymmetric [27,50], and the use of supporting measures is usually desirable to avoid doubts about validity. Additionally, when they apply their prediction method to an external data set (one not used in training), their MMRE reaches scores of up to 159%, with a maximum MRE of about 373%.…”
Section: Defect-Correction Effort Prediction (mentioning, confidence: 99%)
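
MRE scores of this magnitude follow naturally from the asymmetry noted above; a small hypothetical illustration (invented values, not the study's data):

```python
# Hypothetical illustration of MRE asymmetry. With actual effort y,
# the worst underestimate (prediction 0) yields MRE = 1.0, while
# overestimates can grow without bound.
y = 100.0
mre_under = abs(y - 0.0) / y    # maximal underestimate -> 1.00
mre_over = abs(y - 400.0) / y   # 4x overestimate       -> 3.00

print(f"underestimate MRE = {mre_under:.2f}")  # capped at 1.00
print(f"overestimate  MRE = {mre_over:.2f}")   # 3.00, i.e. 300%
```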