2015
DOI: 10.1016/j.infsof.2014.05.010
A framework for comparing multiple cost estimation methods using an automated visualization toolkit

Cited by 33 publications (20 citation statements) | References 24 publications
“…SA has been used to assess the performance of SDEE techniques in a number of studies. [15-18] Kocaguneli et al used SA along with six accuracy metrics (MBRE, MIBRE, mean magnitude of error relative to the estimate, MMRE, Pred(0.25), and MAR) to compare cross-company and within-company datasets using a self-tuning analogy-based effort estimation method called TEAK. [18] They found that different accuracy metrics can result in different conclusions.…”
Section: Standardized Accuracy
confidence: 99%
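The metrics named in the statement above are standard in the SDEE literature; a minimal sketch of their textbook definitions follows (the function names `accuracy_metrics` and `standardized_accuracy` are illustrative, not from the paper, and SA is computed as proposed by Shepperd and MacDonell: one minus the ratio of the model's mean absolute residual to that of random guessing):

```python
import numpy as np

def accuracy_metrics(actual, predicted):
    """Common SDEE accuracy metrics (textbook definitions)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ar = np.abs(actual - predicted)        # absolute residuals
    mre = ar / actual                      # magnitude of relative error
    return {
        "MAR":   ar.mean(),                # mean absolute residual
        "MMRE":  mre.mean(),               # mean magnitude of relative error
        "MBRE":  (ar / np.minimum(actual, predicted)).mean(),
        "MIBRE": (ar / np.maximum(actual, predicted)).mean(),
        "Pred(0.25)": (mre <= 0.25).mean(),  # share within 25% of actual
    }

def standardized_accuracy(actual, predicted, runs=1000, rng=None):
    """SA = 1 - MAR / MAR_p0, where MAR_p0 is the mean MAR of random
    guessing: each target predicted by a randomly drawn actual value."""
    rng = np.random.default_rng(rng)
    actual = np.asarray(actual, dtype=float)
    mar = np.abs(actual - np.asarray(predicted, dtype=float)).mean()
    mar_p0 = np.mean([
        np.abs(actual - rng.choice(actual, size=actual.size)).mean()
        for _ in range(runs)
    ])
    return 1.0 - mar / mar_p0
```

An SA value near 1 means the model clearly outperforms random guessing; a value near 0 means it adds little over guessing.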
“…In addition, the ongoing use of a baseline model in the literature would give a single point of comparison, allowing a meaningful assessment of any new method against previous work. Work in SEE involving regression error curves [Bi and Bennett 2003; Mittas and Angelis 2012] and model comparison frameworks [Mittas et al 2015] have included a naïve baseline model prediction using the mean or median of the data. In addition, a baseline model using the mean of a set of random samplings (with replacement) has also been proposed [Shepperd and MacDonell 2012].…”
Section: Baseline Models
confidence: 99%
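The naïve baselines mentioned above can be sketched in a few lines (function names are hypothetical; the random-sampling baseline follows the with-replacement scheme attributed to Shepperd and MacDonell 2012):

```python
import numpy as np

def mean_baseline(train_effort, n_test):
    """Naive baseline: predict the mean of the training efforts for every case."""
    return np.full(n_test, np.mean(train_effort))

def median_baseline(train_effort, n_test):
    """Naive baseline: predict the median of the training efforts for every case."""
    return np.full(n_test, np.median(train_effort))

def random_guess_mar(effort, runs=1000, rng=None):
    """Mean absolute residual of random guessing: each case is 'predicted'
    by an effort value sampled with replacement, averaged over many runs."""
    rng = np.random.default_rng(rng)
    effort = np.asarray(effort, dtype=float)
    return np.mean([
        np.abs(effort - rng.choice(effort, size=effort.size)).mean()
        for _ in range(runs)
    ])
```

Any proposed estimation method should at minimum beat these baselines; otherwise its apparent accuracy is not evidence of predictive value.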
“…[22,23,70] In other words, each technique can be ranked differently according to different accuracy criteria, which can lead to contradictory results (ie, one criterion selects model A as the best, whereas another selects model B). [94] For example, Idri et al [70] showed that not all the best techniques according to SA were also the best with regard to Pred(p).…”
Section: Methodology Used
confidence: 99%
“…The rationale for using many performance measures is that prior studies showed that selection of the best estimation technique depends on which performance indicator was used, since relying on only one criterion may lead to biased conclusions. In other words, each technique can be ranked differently according to different accuracy criteria, which can lead to contradictory results (ie, one criterion selects model A as the best, whereas another selects model B). For example, Idri et al showed that not all the best techniques according to SA were also the best with regard to Pred(p).…”
Build and evaluate filter fuzzy analogy ensembles (RQ2)
Section: Empirical Design
confidence: 99%