2018
DOI: 10.7287/peerj.preprints.27438v1
Preprint

The effects of change decomposition on code review - a controlled experiment

Abstract: Background. Code review is a cognitively demanding and time-consuming process. Previous qualitative studies hinted that decomposing change sets into multiple, internally coherent ones would improve the reviewing process. So far, no quantitative analysis of this hypothesis has been provided. Aims. (1) Quantitatively measure the effects of change decomposition on the outcome of code review (in terms of the number of defects found, wrongly reported issues, suggested improvements, time, and understanding)…

Cited by 3 publications (7 citation statements) | References 0 publications
“…For instance, Herzig and Zeller [30] proposed an automated approach relying on static and dynamic analysis to identify which code changes should be separated; Yamauchi suggested a clustering algorithm tuned to identify unrelated changes in a commit message [69]; and Dias et al [22] proposed a methodology to untangle code changes at a finer-granularity, i.e., by selecting the single statement of a code review that should be placed in other commits. More recently researchers also proposed untangling techniques tailored explicitly to code review [5,58] and conducted the first experiments to measure the effects of tangled code changes on code review [21,58] substantiating the value of separating unrelated changes. Despite these advances in splitting algorithms and their immediate practical value, no commercial code review tool offers this feature.…”
Section: Discussion
confidence: 99%
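
The statement above surveys several untangling approaches (static and dynamic analysis, clustering of unrelated changes, statement-level splitting). Purely as an illustration of the clustering idea, and not as the algorithm of any of the cited works, the sketch below groups the hunks of a tangled change set into candidate partial change sets: hunks that touch the same file or share an identifier end up in the same group. The Hunk class, the untangle function, and the linking heuristic are all hypothetical names introduced here for the example.

# Illustrative sketch only: a naive untangling heuristic, not the technique of
# Herzig & Zeller, Yamauchi, or Dias et al. Hunks that touch the same file or
# share an identifier are merged into one group via union-find.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Hunk:
    file: str                                       # path of the changed file
    identifiers: set = field(default_factory=set)   # names referenced in the diff

def untangle(hunks):
    """Group hunks into candidate partial change sets (connected components)."""
    parent = list(range(len(hunks)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Link hunks that plausibly belong to the same logical change.
    for i in range(len(hunks)):
        for j in range(i + 1, len(hunks)):
            same_file = hunks[i].file == hunks[j].file
            shared_ids = hunks[i].identifiers & hunks[j].identifiers
            if same_file or shared_ids:
                union(i, j)

    groups = defaultdict(list)
    for i, hunk in enumerate(hunks):
        groups[find(i)].append(hunk)
    return list(groups.values())

if __name__ == "__main__":
    tangled = [
        Hunk("parser.py", {"parse", "Token"}),
        Hunk("lexer.py", {"Token"}),          # related: shares the Token identifier
        Hunk("README.md", {"usage"}),         # unrelated documentation tweak
    ]
    for group in untangle(tangled):
        print([h.file for h in group])

Running the example splits the tangled commit into two groups, {parser.py, lexer.py} and {README.md}; the cited approaches use far richer program-analysis signals, but the output shape (a partition of the change set) is the same.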
“…To regain some degree of control, participants ran all the tasks in a provided virtual machine available in our replication package (di Biase et al, 2018). Moreover, we recorded the screencast of the experiment, therefore not leaving space to misaligned results and mitigating issues of incorrect interpretation.…”
Section: Monitoring vs. Realism
confidence: 99%
“…The participants were asked to conduct the following four tasks. Further details are available in the online appendix (di Biase et al, 2018).…”
Section: Tasks of the Experiments
confidence: 99%