Context: Software code reviews are an important part of the development process, leading to better software quality and reduced overall costs. However, finding appropriate code reviewers is a complex and time-consuming task.

Goals: In this paper, we propose a large-scale study comparing the performance of two main source code reviewer recommendation algorithms (RevFinder and a Naïve Bayes-based approach) in identifying the best code reviewers for open pull requests.

Method: We mined data from GitHub and Gerrit repositories, building a large dataset of 51 projects, with more than 293K pull requests analyzed, 180K owners, and 157K reviewers.

Results: Based on this large-scale analysis, we can state that i) no model can be generalized as the best for all projects, ii) the use of a different repository type (Gerrit, GitHub) can have an impact on the recommendation results, and iii) exploiting the sub-project information available in Gerrit can improve the recommendation results.

The main contributions of this paper are:
• the provision of a large dataset of 51 projects mined from Gerrit (14) and GitHub (37), with 293,337 total pull requests analyzed, considering 180,111 owners and 157,885 reviewers. The dataset is available on Figshare [24];
• a comparison of the results obtained from the two repository types (Gerrit, GitHub) using both the RevFinder and Naïve Bayes-based approaches in the context of the 51-project dataset.

The article is structured as follows. Section II provides the background on several algorithms for code reviewer recommendation. Section III presents the experimental study design, with research questions, context, data analysis, and replicability information. Section IV answers the research questions, with discussions and threats to validity. Section V presents related work, and Section VI draws the conclusions.