Proceedings of the 7th International Conference on Predictive Models in Software Engineering 2011
DOI: 10.1145/2020390.2020399
A principled evaluation of ensembles of learning machines for software effort estimation

Abstract: Background: Software effort estimation (SEE) is a task of strategic importance in software management. Recently, some studies have attempted to use ensembles of learning machines for this task. Aims: We aim at (1) evaluating whether readily available ensemble methods generally improve SEE given by single learning machines and which of them would be more useful; (2) getting insight on how to improve SEE; and (3) how to choose machine learning (ML) models for SEE. Method: A principled and comprehensive statistic…
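As context for the comparison the abstract describes, the sketch below contrasts a single learning machine with a readily available ensemble method on a regression task. It is a minimal illustration only: the library (scikit-learn), the bagged-tree ensemble, the synthetic data and the cross-validated MAE measure are assumptions of this sketch, not the datasets or statistical procedure used in the paper.

```python
# Minimal sketch: single learner vs. a readily available ensemble on an
# effort-estimation-style regression task (synthetic placeholder data).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((60, 5))                                  # 60 hypothetical projects, 5 attributes
y = 100 * X[:, 0] + 50 * X[:, 1] + rng.normal(0, 5, 60)  # synthetic "effort" values

single = DecisionTreeRegressor(random_state=0)
ensemble = BaggingRegressor(DecisionTreeRegressor(random_state=0),
                            n_estimators=50, random_state=0)

for name, model in [("single tree", single), ("bagged trees", ensemble)]:
    mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
    print(f"{name}: MAE = {mae.mean():.2f} (+/- {mae.std():.2f})")
```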

Cited by 28 publications (25 citation statements). References 32 publications (50 reference statements).
“…Chulani et al [1999] presented another study using a Bayesian approach to combine a priori information based on expert knowledge with a linear regression model based on log transformation of the data. The proposed approach outperforms linear regression models based on log transformed data [Minku and Yao 2011] in terms of PRED(20), PRED(25) and PRED(30) on one data set and its extended version. No comparison with Shepperd and Schofield [1997]'s work was given.…”
Section: Machine Learning for SEE (mentioning)
confidence: 99%
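The excerpt above reports performance in terms of PRED(20), PRED(25) and PRED(30). PRED(N) is the proportion of projects whose magnitude of relative error, MRE = |actual − estimated| / actual, is at most N%. A minimal sketch (the function name and example values are hypothetical):

```python
import numpy as np

def pred_n(actual, predicted, n=25):
    """PRED(N): fraction of estimates whose MRE = |actual - predicted| / actual
    is within N percent of the actual effort."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mre = np.abs(actual - predicted) / actual
    return float(np.mean(mre <= n / 100.0))

# Hypothetical efforts: three of the four estimates fall within 25% of the actual value.
actual = [100.0, 200.0, 400.0, 80.0]
predicted = [110.0, 140.0, 420.0, 85.0]
print(pred_n(actual, predicted, 25))   # 0.75
```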
“…In the present work, we look into a type of machine learning method which has recently attracted attention from the SEE community [Braga et al 2007; Kultur et al 2009; Kocaguneli et al 2009; Minku and Yao 2011], namely ensembles of learning machines. Ensembles are sets of learners trained to perform the same task and combined with the aim of improving predictive performance [Chen and Yao 2009].…”
Section: Introduction (mentioning)
confidence: 99%
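The excerpt defines ensembles as sets of learners trained on the same task whose outputs are combined. The sketch below illustrates that idea with a heterogeneous ensemble whose estimates are averaged with equal weights; the specific base learners, the library (scikit-learn) and the combination rule are assumptions of this illustration, not the ensemble methods evaluated in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

def ensemble_predict(X_train, y_train, X_test):
    """Train several different learners on the same task and average their outputs."""
    learners = [DecisionTreeRegressor(random_state=0),
                LinearRegression(),
                KNeighborsRegressor(n_neighbors=3)]
    predictions = [l.fit(X_train, y_train).predict(X_test) for l in learners]
    # Combination rule: a simple unweighted average of the base estimates.
    return np.mean(predictions, axis=0)

# Toy usage with random placeholder data.
rng = np.random.default_rng(1)
X_train, y_train = rng.random((40, 3)), rng.random(40) * 500
print(ensemble_predict(X_train, y_train, rng.random((2, 3))))
```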
“…The parameters of the RTs were minimum total weight of one for the instances in a leaf, and minimum proportion 0.0001 of the variance on all the data that need to be present at a node in order for splitting to be performed. These parameters were shown to be appropriate in the literature (Minku and Yao 2011).…”
Section: Methods (mentioning)
confidence: 93%
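The quoted settings constrain the minimum instance weight allowed in a leaf and the minimum proportion of the overall variance a node must hold before it is split. As a rough sketch only, the analogue below uses scikit-learn's DecisionTreeRegressor; the parameter mapping is an assumption, since scikit-learn exposes no direct "minimum proportion of variance at a node" option and the cited study used different tooling.

```python
# Rough scikit-learn analogue of the regression-tree settings quoted above;
# parameter names and the mapping are assumptions, not the cited study's setup.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.random.rand(100, 4)       # placeholder project attributes
y = np.random.rand(100) * 1000   # placeholder effort values

tree = DecisionTreeRegressor(
    min_samples_leaf=1,          # counterpart of "minimum total weight of one" per leaf
    # No direct "minimum proportion of variance at a node" stopping rule exists here;
    # min_impurity_decrease scaled by the overall variance is only a loose stand-in.
    min_impurity_decrease=0.0001 * np.var(y),
    random_state=0,
)
tree.fit(X, y)
```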
“…Less attention has been focused on MDT methods themselves. In a more recent study, Huang et al (2015) found that only some of the former software effort estimation studies have considered the significance of the MDTs, of which only Minku and Yao (2011) […] (Myrtveit et al, 2001; Strike et al, 2001), and prediction error may be introduced (Mittas and Angelis, 2010). MEI is efficient and has been the most popular imputation approach in SEE; however, it tends to bias the data.…”
Section: KNN Imputation Improvement (mentioning)
confidence: 99%
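The excerpt contrasts mean imputation (MEI) with kNN-based imputation as treatments for missing data in SEE datasets. The sketch below shows the two treatments on a toy matrix, assuming scikit-learn's SimpleImputer and KNNImputer; the cited works do not necessarily use this library.

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

# Toy feature matrix with a missing entry (np.nan) in the second column.
X = np.array([[1.0, 2.0],
              [2.0, np.nan],
              [3.0, 6.0],
              [4.0, 8.0]])

# Mean imputation: fills the gap with the column mean (here about 5.33),
# ignoring the row's other attributes, which can bias the data.
print(SimpleImputer(strategy="mean").fit_transform(X))

# kNN imputation: fills the gap from the k most similar rows
# (here the neighbours with first column 1.0 and 3.0 give (2 + 6) / 2 = 4).
print(KNNImputer(n_neighbors=2).fit_transform(X))
```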