2020 IEEE 20th International Conference on Software Quality, Reliability and Security (QRS)
DOI: 10.1109/qrs51102.2020.00069
Is There A "Golden" Rule for Code Reviewer Recommendation? An Experimental Evaluation

Cited by 7 publications (3 citation statements: 2 supporting, 1 mentioning, 0 contrasting)
References 38 publications
“…When small samples are taken as the historical review data, the first several iterations of the incremental sampling process may yield relatively low performance and thus lower the final average top-k accuracy. This conjecture corroborates the recent findings by Hu et al. [11] that the investigated code reviewer recommendation approaches are sensitive to the training data across evaluation metrics.…”
Section: Neutron (supporting)
Confidence: 92%
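For concreteness, the incremental-sampling evaluation this statement describes can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions, not code from Hu et al. [11] or the citing study: fit, rank_reviewers, and the data shapes are all hypothetical names introduced here for illustration.

def incremental_average_top_k(train_slices, test_sets, k, fit, rank_reviewers):
    # train_slices: chronological slices of historical review data
    # test_sets: per-iteration pairs (requests, sets of actual reviewers)
    accuracies = []
    history = []
    for train_slice, (requests, truths) in zip(train_slices, test_sets):
        history.extend(train_slice)          # training data grows each iteration
        model = fit(history)                 # re-train on all history so far
        hits = sum(1 for req, truth in zip(requests, truths)
                   if set(rank_reviewers(model, req)[:k]) & truth)
        accuracies.append(hits / len(requests))
    # Early iterations with little history can score poorly and drag this
    # average down, which is the effect the statement conjectures.
    return sum(accuracies) / len(accuracies)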
“…In general, the results show that combining the three similarity detection methods achieves the best overall top-k accuracy and MRR on the four projects. Discussion of RQ1: The experimental results in Table 2 indicate that the selected similarity detection methods produce acceptable performance (MRR values between 0.06 and 0.36) for code reviewer recommendation on architecture violations in the four projects, compared to the results (MRR values between 0.14 and 0.59) reported by related studies on generic reviewer recommendation (e.g., [7, 11]) with more reviewer candidates (and thus potentially better performance due to larger datasets). We also observed that the similarity detection methods achieve varying performance across the OSS projects.…”
Section: Results and Discussion, 4.1 RQ1: Effectiveness of Our Approach (mentioning)
Confidence: 95%
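The top-k accuracy and MRR figures quoted above follow the standard definitions used in reviewer-recommendation studies. A minimal Python sketch of both metrics follows; the parameter names (ranked_lists, true_sets) are illustrative assumptions, not identifiers from the cited papers.

def top_k_accuracy(ranked_lists, true_sets, k):
    # Fraction of review requests with at least one actual reviewer in the top k.
    hits = sum(1 for recs, truth in zip(ranked_lists, true_sets)
               if set(recs[:k]) & truth)
    return hits / len(ranked_lists)

def mean_reciprocal_rank(ranked_lists, true_sets):
    # Average over requests of 1/rank of the first correctly recommended
    # reviewer; a request with no correct recommendation contributes 0.
    total = 0.0
    for recs, truth in zip(ranked_lists, true_sets):
        for rank, reviewer in enumerate(recs, start=1):
            if reviewer in truth:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)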
“…When small samples are taken as the historical review data, the first several iterations of the incremental sampling process may yield relatively low performance and thus lower the final average top-k accuracy. This conjecture corroborates the recent findings by Hu et al. (2020) that the investigated code reviewer recommendation approaches are sensitive to the training data across evaluation metrics.…”
Section: RQ3: Comparison of Sampling Methods (supporting)
Confidence: 91%