2007
DOI: 10.1016/j.infsof.2007.02.015
A systematic review of effect size in software engineering experiments

Cited by 301 publications (236 citation statements)
References 36 publications
“…The effect sizes calculated for the aforementioned studies [15,18] are r = −.17 and r = −.04, respectively. Both effect sizes are considered small, according to guidelines for empirical studies in software engineering given by Kampenes et al [89]. The effect size estimate, reported in this paper in Section 5.3, is small as well (only 0.3 percent of the variance in BC can be explained by DevMeth).…”
Section: Discussion, Conclusion and Future Work
confidence: 94%
“…To allow an interpretation of effect sizes in a software engineering context, Kampenes et al (2007) therefore proposed magnitude labels based on a systematic review of effect size in 92 software engineering controlled experiments. The sample size is limited but gives a rough estimation of what constitutes small, medium and large effect sizes in the software engineering domain.…”
Section: Guidelines for Interpreting Effect Size Magnitude
confidence: 99%
“…The study described here was designed to investigate the use of multi-site studies in order to address the problems of small sample sizes in Software Engineering experiments, see Dybå et al (2006) and Kampenes et al (2007). The topic for the multi-site experiment concerned the extent to which structured abstracts were clearer and more complete than conventional abstracts.…”
Section: Background to the Multi-Site Experiments
confidence: 99%
“…This metric assigns more weight to the large N studies and is calculated using formula (5) [46]. For studies in the Software Engineering field, the effect size calculated using point-biserial r correlation is rated as follows: small (0-0.193), medium (0.194-0.456), or large (above 0.456) [49]. The Hunter-Schmidt Method also allows a chi-square significance test of homogeneity across the studies to be performed.…”
Section: Meta-analysis
confidence: 99%
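The excerpt above quotes the magnitude thresholds for point-biserial r that the reviewed paper derived for software engineering experiments, together with an N-weighted mean correlation in the Hunter-Schmidt style. As a minimal sketch (not code from either paper; the study data below are made-up illustrations), the classification and weighting could look like:

```python
# Hedged sketch: label a point-biserial r using the thresholds quoted above
# (small: 0-0.193, medium: 0.194-0.456, large: above 0.456), and compute an
# N-weighted mean r as in Hunter-Schmidt meta-analysis. Example studies are
# invented for illustration only.

def classify_effect_size(r: float) -> str:
    """Return 'small', 'medium', or 'large' for |r| per the quoted benchmarks."""
    r = abs(r)
    if r <= 0.193:
        return "small"
    if r <= 0.456:
        return "medium"
    return "large"

def weighted_mean_r(results):
    """N-weighted mean correlation: r_bar = sum(N_i * r_i) / sum(N_i)."""
    total_n = sum(n for n, _ in results)
    return sum(n * r for n, r in results) / total_n

# Hypothetical (sample size N, observed r) pairs from three studies.
studies = [(40, 0.15), (120, 0.30), (25, 0.50)]
r_bar = weighted_mean_r(studies)
print(round(r_bar, 3), classify_effect_size(r_bar))  # → 0.295 medium
```

Weighting by N gives large studies more influence on the pooled estimate, which is why the pooled r here sits closer to the 120-subject study's value than to the small studies'.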
“…For each experiment, this table reports the population effect size through the unweighted mean r after applying Mullen and Rosenthal's cluster analysis method [58]. This effect has also been classified as Small (S), Medium (M) and Large (L) following the same classification [49] for effect sizes as those applied to the weighted effect size in Table 6.…”
Section: Meta-analysis
confidence: 99%