2010
DOI: 10.1087/20100207
Reliability of reviewers' ratings when using public peer review: a case study

Abstract: If a manuscript meets scientific standards and contributes to the advancement of science, it can be expected that two or more reviewers will agree on its value. Manuscripts are rated reliably when there is a high level of agreement between independent reviewers. This study investigates for the first time whether inter‐rater reliability, which is low with the traditional model of closed peer review, is also low with the new system of public peer review, or whether higher coefficients can be found for public peer…

Cited by 36 publications (18 citation statements)
References 21 publications
“…Van Rooyen, Delamothe, and Evans (2010) found that telling reviewers that their signed reviews might be published (on the web) did not affect the quality of reviews, but it increased the time spent on composing reviews. Bornmann and Daniel (2010) analyzed public reviews submitted to the journal Atmospheric Chemistry and Physics and found that the level of inter-reviewer reliability was low, comparable to that of traditional peer-review processes. Bingham, Higgins, Coleman, and Van Der Weyden (1998) found that postpublication reviews by online readers can provide valuable feedback, but those reviews are often short and specific, and thus do not adequately replace full editorial peer review.…”
Section: Anonymity in Peer Review
confidence: 99%
“…These studies typically use several different approaches to gather evidence on the functionality of peer review. Some (e.g., Bornmann & Daniel, 2010b; Daniel, 1993; Zuckerman & Merton, 1971) used access to journal editorial archives to calculate acceptances, assess inter-reviewer agreement, and compare acceptance rates to various article, topic, and author features. Others interviewed or surveyed authors, reviewers, and editors to assess attitudes and behaviours, while still others conducted randomized controlled trials to assess aspects of peer review bias ( Justice et al , 1998; Overbeke, 1999).…”
Section: Methods
confidence: 99%
“…Rigorous, evidence-based research on peer review itself is surprisingly lacking across many research domains; such research would help to build our collective understanding of the process and guide the design of ad hoc solutions ( Bornmann & Daniel, 2010b; Bruce et al , 2016; Rennie, 2016; Jefferson et al , 2007). Such evidence is needed to form the basis for implementing guidelines and standards at different journals and research communities, and for making sure that editors, authors, and reviewers hold each other reciprocally accountable to them.…”
Section: A Hybrid Peer Review Platform
confidence: 99%
“…It has also been argued that OPR allows for easier identification of scientific misconduct ( Boldt, 2011), and that over time the quality of submitted articles will improve ( Hu et al , 2010; Prug, 2010). OPR affords referees the ability to gain credit for, and to cite, their contributions to science communication ( Boldt, 2011; Bornmann & Daniel, 2010; Fitzpatrick, 2010; Prug, 2010; Pöschl, 2009). More broadly, OPR gives the scholarly community insight into author/referee conversations during the review process.…”
Section: Why Open Peer Review?
confidence: 99%