2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Society (ICSE-SEIS)
DOI: 10.1109/icse-seis.2019.00014

Beyond the Code Itself: How Programmers Really Look at Pull Requests

Abstract: Developers in open source projects must make decisions on contributions from other community members, such as whether or not to accept a pull request. However, secondary factors, beyond the code itself, can influence those decisions. For example, signals from GitHub profiles, such as the number of followers, activity, names, or gender, can also be considered when developers make decisions. In this paper, we examine how developers use these signals (or not) when making decisions about code contributions. To evaluate…

Cited by 38 publications (29 citation statements: 1 supporting, 28 mentioning, 0 contrasting)
References 21 publications
Citing publications: 2020–2024
“…They found that a complete scan of the whole code helps students to find defects, and they share similar findings on code reading patterns. Besides general code reviewing patterns, Ford et al. used eye tracking to study the influence of supplemental technical signals (such as the number of followers, activity, names, or gender) on pull request acceptance [6].…”
Section: Related Work (mentioning)
confidence: 99%
“…Code review involves one or more developers examining a proposed change to a codebase (e.g., a patch and its associated documentation) written by others and deciding whether the change should be accepted and integrated into the codebase or rejected for further refinement. Previous research has found that code reviewers do not always prioritize the actual content and quality of a proposed patch, and studies of the effects of various biases (e.g., gender bias) on code review have only just begun [6,14,40]. This paper focuses on code review from a human trust perspective.…”
Section: Introduction (mentioning)
confidence: 99%
“…The importance of code review has been emphasized both in software companies (e.g., Microsoft [10], Google [50,102], Facebook [94,104]) and in open source projects [9,77]. While code review is widely used in quality assurance, the developers who conduct these reviews are vulnerable to biases [27,91]. In this paper, we investigate objective sources and characterizations of biases during code review.…”
Section: Introduction (mentioning)
confidence: 99%
“…Similarly, another study using behavioral data on GitHub found that women concentrate their efforts on fewer projects and exhibit a narrower band of accepted behavior [43]. Furthermore, research has shown that developers may not even recognize the potential effects of biases related to code authors when performing code reviews [27,91]. Such biases may decrease not only the quality of code reviews but also the productivity of software development, especially in fields like software engineering that are dominated by men [40,78,105], even though (gender) diversity has a significant positive influence on productivity [12,37,73,99].…”
Section: Introduction (mentioning)
confidence: 99%
“…However, pull requests in that study were proposed manually and only a single time, after prior consultation with the project maintainers. Furthermore, we know that contributions are evaluated not only on their content, but also on the social characteristics of the contributor (Terrell et al., 2017; Ford et al., 2019). In the case of contributing bots, identifying them as bots can be sufficient to observe a negative bias compared to contributions from humans (Murgia et al., 2016).…”
Section: Introduction (mentioning)
confidence: 99%