1985
DOI: 10.1177/016224398501000306

Experience with NIH Peer Review: Researchers' Cynicism and Desire for Change

Abstract: In the United States, peer review is central to the process by which many government agencies select research proposals for funding. Although several different agency versions of peer review are practiced, they share one characteristic: Scientists judge both the potential value of proposed research projects and the ability of proposers to perform the studies. The premise is self-fulfilling. If scientists identified as peer specialists are best qualified to judge the scientific merit of proposals, then peer re…

Cited by 36 publications (21 citation statements); References 3 publications
“…For the National Institute of Handicapped Research (now National Institute on Disability and Rehabilitation Research, NIDRR), a survey of applicants' opinions found that 41 percent of respondents did not agree with the following statement: “I think that as a whole, the peer reviewer comments were fair” (Fuhrer & Grabois, 1985, p. 319). In a survey of applicants for grants from the National Cancer Institute (NCI), 40 percent of applicants found that reviewers were biased against researchers in minor universities or institutions in certain regions of the U.S. (Gillespie, Chubin, & Kurzon, 1985).…”
Section: Reliability, Fairness, and Predictive Validity of Peer Review
confidence: 99%
“…Further surveys showed similar results regarding the perceived fairness of peer review; these were published by Chubin and Hackett (1990), McCullough (1989, 1994), and Resnik, Gutierrez‐Ford, and Peddada (2008). Predictably, there is a close connection between satisfaction with the peer review process and both one's own success in a review and one's participation in a satisfaction survey: Successful applicants and authors are more satisfied and more often take part in surveys than unsuccessful applicants (Gillespie et al., 1985).…”
Section: Reliability, Fairness, and Predictive Validity of Peer Review
confidence: 99%
“…Social and publication biases: Although often idealized as impartial, objective assessors, in reality studies suggest that peer reviewers may be subject to social biases on the grounds of gender (Budden et al., 2008; Lloyd, 1990; Tregenza, 2002), nationality (Daniel, 1993; Ernst & Kienbacher, 1991; Link, 1998), institutional affiliation (Dall’Aglio, 2006; Gillespie et al., 1985; Peters & Ceci, 1982), language (Cronin, 2009; Ross et al., 2006; Tregenza, 2002) and discipline (Travis & Collins, 1991). Other studies suggest so-called “publication bias”, where prejudices against specific categories of works shape what is published.…”
Section: Introduction
confidence: 99%
“…Such studies have drawn on interviews with various agencies’ staff (Cole, Rubin and Cole 1978; Chubin and Hackett 1990) or with reviewers themselves (Hackett 1987; Lamont 2009); surveys of grant applicants (Gillespie, Chubin and Kurzon 1985; McCullough 1989); quantitative analyses of scores and funding outcomes (Klahr 1985; Sigelman and Scioli 1987; Wenneras and Wold 1997); and de-identified written critiques of funded applications (Porter and Rossini 1985). Such studies rely on post hoc data, with little attention paid to what goes on during review panel meetings; indeed, Olbrecht and Bornmann (2010) identified only five empirical studies that examined the judgment processes of peer review panels.…”
confidence: 99%