Purpose
This paper appraises the current challenges in reviewers' adoption of generative AI to evaluate the readability and quality of submissions. It discusses how to make the AI-powered peer-review process immune to unethical practices, such as the proliferation of poor-quality or fake AI-generated reviews that could undermine the value of peer review.
Design/methodology/approach
This paper examines the potential roles of AI in peer review, the challenges it raises and how those challenges can be mitigated. It critically appraises current opinions and practices while acknowledging the lack of consensus on best practices for the use of AI in peer review.
Findings
The adoption of generative AI in the peer review process seems inevitable, but it has to happen (1) gradually, (2) under human supervision, (3) by raising stakeholders' awareness of all its ramifications, (4) by improving transparency and accountability, (5) by ensuring confidentiality through the use of locally hosted AI systems, (6) by acknowledging its limitations, such as its inherent bias and lack of up-to-date knowledge, (7) by putting robust safeguards in place to maximize its benefits and limit its potential harms and (8) by implementing robust quality assurance to assess its impact on overall research quality.
Originality/value
In the current race for more AI in scholarly communication, this paper advocates human-centered oversight of AI-powered peer review. Eroding the role of humans would create an undesirable situation in which peer review gradually metamorphoses into an awkward conversation between an AI writing a paper and an AI evaluating that paper.