Proceedings of the 9th International Conference on Predictive Models in Software Engineering 2013
DOI: 10.1145/2499393.2499394
The impact of parameter tuning on software effort estimation using learning machines

Abstract: Background: The use of machine learning approaches for software effort estimation (SEE) has been studied for more than a decade. Most studies performed comparisons of different learning machines on a number of data sets. However, most learning machines have more than one parameter that needs to be tuned, and it is unknown to what extent parameter settings may affect their performance in SEE. Many works seem to make an implicit assumption that parameter settings would not change the outcomes significantly. Aims…

Cited by 70 publications (52 citation statements)
References 33 publications
“…In particular, other clustering methods, base learners, project input attributes, project features for clustering, parameter values [41] and (automated) tuning procedures [1,9] could be investigated, besides the proposal of a more streamlined approach to update CC models.…”
Section: Discussion
confidence: 99%
“…Therefore, they have similar threats to validity as follows. When using machine learning approaches, it is important that the approaches being compared use fair parameter choices in comparison to each other in order to address internal validity [29,41]. In this paper, both the RTs used as WC learners and within Dycom and Clustering Dycom used the same parameters, which were the ones more likely to obtain good results in the literature [34].…”
Section: Threats To Validity
confidence: 99%
“…Finally, they concluded that learning learners is an active research area and much further work is required before we can understand the costs and benefits of this approach. In (Song et al 2013) the authors proposed a framework to investigate to what extent parameter settings affect the performance of learning machines in software effort estimation, and what learning machines are more sensitive to their parameters. They concluded that different learning machines have different sensitivity to their parameter settings.…”
Section: Framework For Benchmarking Prediction Models
confidence: 99%
“…At time t, we may wish to estimate projects p_{t+1} and p_{t+2} based on a model trained with all projects completed up to time t. Then, once we reach time t + 1, we may wish to provide an updated prediction for project p_{t+2} based on a model trained with all projects completed up to time t + 1. This approach has been used, for example, in a study of the impact of parameter tuning on SEE (Song et al 2013). This study was based on three WC datasets and five ML approaches (MLP, Bag + MLP, RT, Bag + RT and k-NN).…”
Section: Chronological Splitting Approaches
confidence: 99%
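The chronological splitting scheme quoted above can be sketched in a few lines. This is a minimal illustration only: the project records and the trivial mean-effort predictor below are hypothetical stand-ins, not the datasets or learning machines used in the cited study.

```python
# Minimal sketch of chronological splitting for effort estimation.
# Each project is (completion_time, features, actual_effort); all
# values here are made up for illustration.
projects = [
    (1, [10, 2], 120),
    (2, [20, 3], 250),
    (3, [15, 1], 140),
    (4, [30, 4], 400),
]

def mean_effort_model(training):
    """Trivial stand-in for a learning machine: predict the mean effort."""
    efforts = [effort for _, _, effort in training]
    mean = sum(efforts) / len(efforts)
    return lambda features: mean

# At each time t, train only on projects completed up to t and
# predict the effort of projects that complete later.
for t in range(1, 4):
    training = [p for p in projects if p[0] <= t]
    future = [p for p in projects if p[0] > t]
    model = mean_effort_model(training)
    for time, features, actual in future:
        prediction = model(features)
        print(f"t={t}: project finishing at t={time} -> "
              f"predicted {prediction:.1f}, actual {actual}")
```

The point of the scheme is that a project's prediction is refreshed as time advances: the project finishing at t = 3 is first predicted with a model trained up to t = 1, then re-predicted with models trained up to t = 2.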
“…This section analyses DCL's sensitivity to parameters, revealing whether DCL's predictive performance can be influenced by parameter tuning (Song et al 2013) and which parameters influence its predictive performance the most.…”
Section: The Impact Of DCL's Parameters
confidence: 99%
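A parameter sensitivity analysis of the kind mentioned above can be sketched generically: vary one parameter at a time while holding the others at defaults, and compare the resulting spread in error. The model, parameter names, and synthetic error surface below are assumptions for illustration, not the actual DCL approach or its evaluation.

```python
# Generic one-at-a-time parameter sensitivity sketch (illustrative;
# the error function is a synthetic stand-in, not a real learner).

def evaluate(learning_rate, n_neighbors):
    """Hypothetical error surface: sensitive to learning_rate,
    nearly insensitive to n_neighbors."""
    return abs(learning_rate - 0.1) * 100 + n_neighbors * 0.01

grid = {
    "learning_rate": [0.01, 0.1, 0.5],
    "n_neighbors": [1, 3, 5],
}
defaults = {"learning_rate": 0.1, "n_neighbors": 3}

# For each parameter, record the spread of errors as its value
# varies; a large spread means performance is sensitive to it.
sensitivity = {}
for name, values in grid.items():
    errors = []
    for v in values:
        params = dict(defaults, **{name: v})
        errors.append(evaluate(**params))
    sensitivity[name] = max(errors) - min(errors)

print(sensitivity)
```

Ranking parameters by this spread shows which settings deserve careful tuning and which can safely be left at defaults, which is the kind of question the cited study poses for learning machines in SEE.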