2014
DOI: 10.1007/s10664-014-9339-3
Supporting and accelerating reproducible empirical research in software evolution and maintenance using TraceLab Component Library

Cited by 17 publications (15 citation statements)
References 54 publications
“…in a system, and those roles can be used for maintenance tasks such as design recovery [38,39], feature location [59,58,131,57], program comprehension, and pattern/anti-pattern detection [63,61,31,98].…”
Section: Unit Test Case Stereotypes
confidence: 99%
“…These techniques operate using a variety of approaches: textual or structural analysis of the source code [32], [35], [36], historic analysis of software change information [31], [34], and dynamic analysis of pass/fail test case information [15]. As bug reports are usually formulated in natural language, and the source code also includes large amounts of comments and identifiers, Information-Retrieval (IR) based bug localization techniques are frequently proposed [3], [6], [8], [9], [13], [17], [26], [27], [29], [32]- [35], [37].…”
Section: Introduction
confidence: 99%
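The IR-based approach described in the statement above can be illustrated with a minimal sketch: treat the bug report as a query and rank source files by TF-IDF cosine similarity over their text. The file names and toy corpus below are hypothetical, and real techniques add preprocessing such as identifier splitting and stemming; this is only a sketch of the core idea, not any cited tool's implementation.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Compute a sparse TF-IDF vector (dict) for each token list in docs."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                      # document frequency per term
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_files(bug_report, source_files):
    """Rank source files by textual similarity to the bug report (the query)."""
    docs = [bug_report.lower().split()] + [f.lower().split() for f in source_files.values()]
    vecs = tf_idf_vectors(docs)
    query, file_vecs = vecs[0], vecs[1:]
    scores = {name: cosine(query, v) for name, v in zip(source_files, file_vecs)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical toy corpus: file name -> flattened source text.
files = {
    "parser.py": "def parse token stream raise syntax error on bad token",
    "cache.py": "def evict least recently used cache entry",
}
ranking = rank_files("syntax error when parsing bad token", files)
```

Here the file whose identifiers and comments share vocabulary with the bug report ranks first, which is the basic mechanism the cited IR-based techniques build on.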
“…A systematic review published in 2016, covering an international repository with data from thousands of software projects, concluded that several studies lack clarity on how the data were prepared and used, which makes it difficult to compare results among studies as well as to replicate them [35]. Another study, published in 2015, points out that research studies in SM are notoriously hard to reproduce due to a lack of datasets and of implementation details such as parameter values or environmental settings [36]. Finally, a study from 2018 found that many studies hinder replication and reduce comparability among studies because of unavailable data [37].…”
Section: Introduction
confidence: 99%
“…We compare the prediction accuracies of the SGB, SLR, MLP, RBFNN, GRNN, ɛ‐SVR, ʋ‐SVR, DT and ARu models, taking into account the following issues suggested when machine learning models are compared with non‐machine learning models (such as an SLR) [13]:

- Use of the absolute residuals (ARs) as the criterion for comparing prediction accuracy between models.
- Application of leave‐one‐out cross‐validation (LOOCV) as the validation method.
- Description of how the SGB was optimised.
- Use of mutually exclusive data sets for training and testing the models.
- Selection of a suitable statistical test for comparing prediction accuracy among models, based on data dependence and normality analysis.
- Use of statistically significant differences as the means for comparing prediction accuracy.

A systematic review published in 2016, covering an international repository with data from thousands of software projects, concluded that several studies lack clarity on how the data were prepared and used, which makes it difficult to compare results among studies as well as to replicate them [35]. Another study, published in 2015, points out that research studies in SM are notoriously hard to reproduce due to a lack of datasets and of implementation details such as parameter values or environmental settings [36]. Finally, a study from 2018 found that many studies hinder replication and reduce comparability among studies because of unavailable data [37].…”
Section: Introduction
confidence: 99%
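The validation protocol listed in the statement above (absolute residuals under LOOCV, with per-project paired differences feeding a statistical test) can be sketched as follows. The effort data are hypothetical, and a trivial mean-predictor baseline stands in for the paper's actual models; which paired test to apply (e.g. Wilcoxon vs. paired t-test) would then depend on the normality analysis the statement mentions.

```python
import statistics

def slr_fit(xs, ys):
    """Fit a simple linear regression y = a + b*x by least squares."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    return my - b * mx, b

def loocv_abs_residuals(xs, ys, fit):
    """LOOCV: train on n-1 points, record the absolute residual |actual - predicted|."""
    ars = []
    for i in range(len(xs)):
        model = fit(xs[:i] + xs[i+1:], ys[:i] + ys[i+1:])
        ars.append(abs(ys[i] - model(xs[i])))
    return ars

def slr_model(tr_x, tr_y):
    a, b = slr_fit(tr_x, tr_y)
    return lambda x: a + b * x

def mean_model(tr_x, tr_y):
    m = statistics.mean(tr_y)          # baseline: always predict the training mean
    return lambda x: m

# Hypothetical effort data: project size (KLOC) -> effort (person-months).
size = [10, 20, 30, 40, 50, 60]
effort = [12, 21, 33, 39, 52, 61]

# Identical LOOCV splits for both models (mutually exclusive train/test sets).
ars_slr = loocv_abs_residuals(size, effort, slr_model)
ars_mean = loocv_abs_residuals(size, effort, mean_model)

# Per-project paired differences would feed the chosen statistical test.
diffs = [m - s for s, m in zip(ars_slr, ars_mean)]
```

Comparing the two AR distributions under the same splits, rather than a single aggregate error figure, is what makes the paired statistical test in the checklist applicable.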