Peer review is the main institution responsible for the evaluation and gestation of scientific research. Although peer review is widely seen as vital to scientific evaluation, anecdotal evidence abounds of gatekeeping mistakes at leading journals, such as rejecting seminal contributions or accepting mediocre submissions. Systematic evidence regarding the effectiveness, or lack thereof, of scientific gatekeeping is scant, largely because access to manuscripts rejected by journals is rarely available. Using a dataset of 1,008 manuscripts submitted to three elite medical journals, we show differences in citation outcomes for articles that received different appraisals from editors and peer reviewers. Among rejected articles, desk-rejected manuscripts, deemed unworthy of peer review by editors, received fewer citations than those sent out for peer review. Among both rejected and accepted articles, manuscripts with lower scores from peer reviewers received fewer citations when they were eventually published. However, hindsight reveals numerous questionable gatekeeping decisions. Of the 808 eventually published articles in our dataset, our three focal journals rejected many highly cited manuscripts, including the 14 most popular, roughly the top 2 percent. Of those 14 articles, 12 were desk-rejected. This finding raises the concern that peer review may be ill-suited to recognizing and gestating the most impactful ideas and research. Despite these missed opportunities, our results show that, on the whole, peer review added value in our case studies. Editors and peer reviewers generally, but not always, made good decisions regarding the identification and promotion of quality in scientific manuscripts.

peer review | innovation | decision making | publishing | creativity

Peer review alters science by filtering out rejected manuscripts and shaping the revision of eventually published articles. Publication in leading journals is linked to professional rewards in science, which influences the choices scientists make about their work (1). Although peer review is widely cited as central to academic evaluation (2, 3), numerous scholars have expressed concern about its effectiveness, particularly its tendency to protect the scientific status quo and suppress innovative findings (4, 5). Others have focused on errors of omission in peer review, offering anecdotes of seminal scientific innovations that faced emphatic rejections from high-status gatekeepers and journals before eventually achieving publication and positive regard (6-8). Unfortunately, systematic study of peer review is difficult, largely because of the sensitive and confidential nature of the subject matter.

Based on a dataset of 1,008 manuscripts submitted to three leading medical journals (Annals of Internal Medicine, British Medical Journal, and The Lancet), we analyzed the effectiveness of peer review. In our dataset, 946 submissions were rejected and 62 were accepted. Among the rejections, we identified 757 manuscripts eventually published...