2017 IEEE International Conference on Software Maintenance and Evolution (ICSME)
DOI: 10.1109/icsme.2017.40
Confusion Detection in Code Reviews

Abstract: Code reviews are an important mechanism for assuring the quality of source code changes. Reviewers can either add general comments pertaining to the entire change or pinpoint concerns or shortcomings about a specific part of the change using inline comments. Recent studies show that reviewers often do not understand the change being reviewed and its context. Our ultimate goal is to identify the factors that confuse code reviewers and understand how confusion impacts the efficiency and effectiveness of code review…

Cited by 30 publications (20 citation statements) | References 38 publications
“…To better understand how integrators review code, Yu et al. investigated the latency of open-source code reviews, concluding that many process-related features impact that latency [29]. Ebert et al. attempted to find expressions of confusion in code reviews, to further understand how confusion impacts code reviews [30]. Beller et al. investigated what types of changes are made during a code review, finding that most changes are made to improve evolvability [31].…”
Section: Related Work
confidence: 99%
“…When studying the cognitive level of think-aloud statements during reviews, McMeekin et al. (2009) found that more structured techniques lead to higher cognition levels. Recent work by Ebert et al. (2017) tries to measure signs of confusion from remarks written by the reviewers. The model of human and computer as a joint cognitive system (Dowell and Long, 1998), also called distributed cognition, has been proposed by Walenstein to study cognitive load in software development tools (Walenstein, 2002, 2003).…”
Section: Working Memory and Code Review
confidence: 99%
“…We conduct an explanatory case study [6] on Android because it is a large and well-known open-source ecosystem that adopts a rigorous code review process using Gerrit. We extracted our annotation sample from the dataset of inline comments we previously collected from the entire Android ecosystem [17]. Our annotation sample contains 499 questions extracted from 399 randomly selected inline comments (corresponding to a confidence interval of less than 5% at a 95% confidence level for a population of 10,965 questions).…”
Section: A. Annotation Sample From Android Code Reviews
confidence: 99%
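The sample size quoted in the excerpt above can be sanity-checked with the standard Cochran formula with finite-population correction. This is a minimal sketch; the exact calculator and parameters the cited authors used are not stated in the excerpt, so the worst-case proportion p = 0.5 and z = 1.96 (95% confidence) are assumptions.

```python
import math

def cochran_sample_size(population: int, margin: float = 0.05,
                        z: float = 1.96, p: float = 0.5) -> int:
    """Minimum sample size for estimating a proportion in a finite population.

    Uses Cochran's formula with finite-population correction. p = 0.5 is the
    worst case (maximum variance); z = 1.96 corresponds to 95% confidence.
    """
    variance_term = z ** 2 * p * (1 - p)
    numerator = population * variance_term
    denominator = margin ** 2 * (population - 1) + variance_term
    return math.ceil(numerator / denominator)

# Population of 10,965 questions, 5% margin of error, 95% confidence:
print(cochran_sample_size(10_965))  # → 372
```

With these assumptions the required sample is 372 questions, so the 499 questions annotated in the cited study comfortably exceed the minimum.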