2019
DOI: 10.1007/s10664-019-09686-w

A novel online supervised hyperparameter tuning procedure applied to cross-company software effort estimation

Abstract: Software effort estimation is an online supervised learning problem, where new training projects may become available over time. In this scenario, the Cross-Company (CC) approach Dycom can drastically reduce the number of Within-Company (WC) projects needed for training, saving their collection cost. However, Dycom requires CC projects to be split into subsets. Both the number and composition of such subsets can affect Dycom's predictive performance. Even though clustering methods could be used to automatically…

Cited by 33 publications (17 citation statements)
References: 54 publications

“…Many ML techniques have a number of hyperparameters that can be tuned (e.g., the learning rate, number of hidden units, or activation function) [23]. Hyperparameter tuning can have a major impact on model accuracy, and can enable significant improvements in the results of even simple ML techniques.…”
Section: RQ6 (Challenges)
confidence: 99%
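As an illustration of the point in the statement above, the following is a minimal sketch of tuning exactly the hyperparameters it names (learning rate, number of hidden units, activation function) with a plain grid search. It assumes scikit-learn's GridSearchCV and MLPRegressor; the dataset and grid values are purely illustrative and are not taken from any of the cited studies.

```python
# Minimal sketch: grid search over learning rate, hidden units and activation.
# Dataset and grid values are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)

param_grid = {
    "learning_rate_init": [1e-3, 1e-2, 1e-1],   # learning rate
    "hidden_layer_sizes": [(8,), (32,), (64,)],  # number of hidden units
    "activation": ["relu", "tanh"],              # activation function
}

search = GridSearchCV(
    MLPRegressor(max_iter=2000, random_state=0),
    param_grid,
    scoring="neg_mean_absolute_error",
    cv=3,
)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
print("best CV MAE:", -search.best_score_)
```

Even for such a simple model, the MAE typically varies noticeably across the grid, which is the effect the citing paper highlights.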
“…In terms of online hyperparameter tuning algorithms, there are only a few works [17, 23–26] that use support vector machines together with batch processing, or gradient-based solutions combined with brute force or genetic algorithms, to optimise hyperparameters. Lawal and Abdulkarim [23] introduce an incremental learning-model selection method for data stream batches.…”
Section: Related Work
confidence: 99%
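The following is a minimal sketch of the general idea of incremental model selection over data stream batches, not the specific method of Lawal and Abdulkarim [23]: each candidate hyperparameter setting is evaluated on every new batch before being trained on it (test-then-train), and the setting with the lowest running error is selected. The simulated stream, the candidate learning rates and the use of scikit-learn's SGDRegressor are assumptions made for illustration.

```python
# Sketch (assumption, not the method from [23]): incremental model selection
# over stream batches using a test-then-train protocol.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
candidates = {eta: SGDRegressor(learning_rate="constant", eta0=eta, random_state=0)
              for eta in (1e-4, 1e-3, 1e-2)}          # candidate settings (illustrative)
running_mae = {eta: [] for eta in candidates}

for batch in range(20):                                # simulated stream of batches
    X = rng.normal(size=(50, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=50)
    for eta, model in candidates.items():
        if batch > 0:                                  # test on the new batch first ...
            running_mae[eta].append(mean_absolute_error(y, model.predict(X)))
        model.partial_fit(X, y)                        # ... then train on it

best = min(running_mae, key=lambda eta: np.mean(running_mae[eta]))
print("selected learning rate:", best)
```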
“…The algorithm computes the hyperparameter gradients on the fly whenever a new datum is observed and then smoothly updates the hyperparameters with the average of the past and current hypergradients. Minku [25] proposed an online hyperparameter tuning method that maintains a number of model instances created from different subsets. The method applies computational brute force to find the model instance with the smallest validation error.…”
Section: Related Work
confidence: 99%
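The following is an illustrative sketch of the hypergradient idea described in the first sentence of the statement above, not the cited algorithm itself: whenever a new datum is observed, a hypergradient of the loss with respect to the learning rate is computed from the current and previous gradients, and the learning rate is updated using a running average of past and current hypergradients. The linear model, the hyper-learning rate value and the lower bound on the learning rate are assumptions.

```python
# Illustrative sketch (assumptions, not the cited algorithm): online update of a
# learning rate from a running average of hypergradients.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)              # model weights (simple linear model)
alpha = 0.01                 # learning rate (the hyperparameter being tuned)
beta = 1e-4                  # hyper-learning rate (assumed value)
prev_grad = None
avg_hypergrad, n_hg = 0.0, 0

for t in range(2000):        # simulated stream, one datum at a time
    x = rng.normal(size=3)
    y = x @ np.array([0.5, -1.0, 2.0]) + rng.normal(scale=0.05)
    grad = (w @ x - y) * x                          # gradient of the squared error
    if prev_grad is not None:
        hypergrad = -grad @ prev_grad               # d(loss)/d(alpha) for plain SGD
        n_hg += 1
        avg_hypergrad += (hypergrad - avg_hypergrad) / n_hg   # average of past and current
        alpha = max(1e-6, alpha - beta * avg_hypergrad)       # smooth learning-rate update
    w -= alpha * grad                               # usual SGD step on the model
    prev_grad = grad

print("final learning rate:", alpha, "weights:", w)
```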
“…In terms of how to set the parameters in real-world problems, the difficulty is that the best values may change over time. Potentially, one could run multiple versions of the approach with different parameter settings [61]. The parameters 'β', 'θ' and 'Period' were analysed for their effect on prediction accuracy, ensemble size and drift detections.…”
Section: Parameters Analysis
confidence: 99%
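The following is a generic sketch of the "run multiple versions with different parameter settings" idea mentioned above, not the cited approach and not using its 'β', 'θ' or 'Period' parameters: several parameterisations are run in parallel on a stream and a fading prequential error is tracked for each, so the preferred setting can change over time. The estimator (scikit-learn's SGDRegressor), the candidate settings, the fading factor and the simulated drift are all illustrative assumptions.

```python
# Generic sketch (hypothetical settings, not the cited approach): multiple
# versions run in parallel, each tracked with a fading prequential error.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(1)
settings = {"eta0=0.001": 0.001, "eta0=0.01": 0.01, "eta0=0.1": 0.1}
versions = {name: SGDRegressor(learning_rate="constant", eta0=v, random_state=0)
            for name, v in settings.items()}
fading_mae = {name: 0.0 for name in versions}
fade = 0.99                                    # fading factor (assumed value)

coef = np.array([1.0, -1.0, 2.0])
for t in range(3000):
    if t == 1500:                              # simulated concept drift
        coef = np.array([-2.0, 0.5, 1.0])
    x = rng.normal(size=(1, 3))
    y = np.array([x[0] @ coef + rng.normal(scale=0.1)])
    for name, model in versions.items():
        if t > 0:                              # predict first (prequential) ...
            err = abs(model.predict(x)[0] - y[0])
            fading_mae[name] = fade * fading_mae[name] + (1 - fade) * err
        model.partial_fit(x, y)                # ... then train

print("currently best setting:", min(fading_mae, key=fading_mae.get))
```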