Assigning authorship and recognizing contributions to scholarly works is challenging on many levels. Here we discuss ethical, social, and technical challenges to the concept of authorship that may impede the recognition of contributions to a scholarly work. Recent work in the field of authorship shows that shifting to a more inclusive contributorship approach may address these challenges. Recent efforts to enable better recognition of contributions to scholarship include the development of the Contributor Role Ontology (CRO), which extends the CRediT taxonomy and can be used in information systems for structuring contributions. We also introduce the Contributor Attribution Model (CAM), which provides a simple data model that relates the contributor to research objects via the role that they played, as well as the provenance of the information. Finally, requirements for the adoption of a contributorship-based approach are discussed.
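The CAM's core idea (linking a contributor to a research object via a role, plus provenance for the assertion) can be illustrated with a minimal sketch. The class and field names below are illustrative only, not the official CAM schema:

```python
from dataclasses import dataclass

@dataclass
class Provenance:
    """Who asserted this contribution, and when."""
    asserted_by: str   # e.g. "corresponding author" or a curator
    asserted_on: str   # ISO 8601 date of the assertion

@dataclass
class Contribution:
    """A CAM-style statement: contributor -> role -> research object."""
    contributor: str        # e.g. an ORCID iD
    research_object: str    # e.g. a DOI
    role: str               # e.g. a CRediT/CRO role such as "data curation"
    provenance: Provenance

# Example: recording that a contributor performed data curation for a paper.
record = Contribution(
    contributor="https://orcid.org/0000-0000-0000-0000",
    research_object="https://doi.org/10.1234/example",
    role="data curation",
    provenance=Provenance(asserted_by="corresponding author",
                          asserted_on="2024-01-15"),
)
print(record.role)
```

Structuring contributions as explicit records like this, rather than as an ordered author list, is what allows information systems to query who did what, and on whose authority the claim rests.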
Retractions solicited by authors following the discovery of an unintentional error (what we henceforth call a "self-retraction") are a new phenomenon of growing importance, about which very little is known. Here we present results of a small qualitative study aimed at gaining preliminary insights about circumstances, motivations and beliefs that accompanied the experience of a self-retraction. We identified retraction notes that unambiguously reported an honest error and that had been published between the years 2010 and 2015. We limited our sample to retractions with at least one co-author based in the Netherlands, Belgium, United Kingdom, Germany or a Scandinavian country, and we invited these authors to a semi-structured interview. Fourteen authors accepted our invitation. Contrary to our initial assumptions, most of our interviewees had not originally intended to retract their paper. They had contacted the journal to request a correction and the decision to retract had been made by journal editors. All interviewees reported that having to retract their own publication made them concerned for their scientific reputation and career, often causing considerable stress and anxiety. Interviewees also encountered difficulties in communicating with the journal and recalled other procedural issues that had unnecessarily slowed down the process of self-retraction. Intriguingly, however, all interviewees reported how, contrary to their own expectations, the self-retraction had brought no damage to their reputation and in some cases had actually improved it. We also examined the ethical motivations that interviewees ascribed, retrospectively, to their actions and found that such motivations included a combination of moral and prudential (i.e. pragmatic) considerations. These preliminary results suggest that scientists would welcome innovations to facilitate the process of self-retraction.
Background: The emergence of systems based on large language models (LLMs) such as OpenAI's ChatGPT has created a range of discussions in scholarly circles. Since LLMs generate grammatically correct and mostly relevant (yet sometimes outright wrong, irrelevant, or biased) outputs in response to provided prompts, using them in various writing tasks, including writing peer review reports, could result in improved productivity. Given the significance of peer review in the existing scholarly publication landscape, exploring the challenges and opportunities of using LLMs in peer review seems urgent. Now that the first scholarly outputs have been generated with LLMs, we anticipate that peer review reports, too, will be generated with the help of these systems. However, there are currently no guidelines on how these systems should be used in review tasks.

Methods: To investigate the potential impact of using LLMs on the peer review process, we used five core themes within discussions about peer review suggested by Tennant and Ross-Hellauer. These include 1) reviewers' role, 2) editors' role, 3) functions and quality of peer reviews, 4) reproducibility, and 5) the social and epistemic functions of peer reviews. We provide a small-scale exploration of ChatGPT's performance regarding the identified issues.

Results: LLMs have the potential to substantially alter the roles of both peer reviewers and editors. By supporting both actors in efficiently writing constructive reports or decision letters, LLMs can facilitate higher-quality review and address issues of reviewer shortage. However, the fundamental opacity of LLMs' training data, inner workings, data handling, and development processes raises concerns about potential biases, confidentiality, and the reproducibility of review reports. Additionally, as editorial work has a prominent function in defining and shaping epistemic communities, as well as negotiating normative frameworks within such communities, partly outsourcing this work to LLMs might have unforeseen consequences for social and epistemic relations within academia. Regarding performance, we identified major enhancements over a short period and expect LLMs to continue developing.

Conclusions: We believe that LLMs are likely to have a profound impact on academia and scholarly communication. While potentially beneficial to the scholarly communication system, many uncertainties remain and their use is not without risks. In particular, concerns about the amplification of existing biases and inequalities in access to appropriate infrastructure warrant further attention. For the moment, we recommend that if LLMs are used to write scholarly reviews and decision letters, reviewers and editors should disclose their use and accept full responsibility for data security and confidentiality, and for their reports' accuracy, tone, reasoning and originality.
There is no clear-cut boundary between Free and Open Source Software (FOSS) and Open Scholarship, and the histories, practices, and fundamental principles of the two remain complex. In this study, we critically appraise the intersections and differences between the two movements. Based on our thematic comparison, we draw several key conclusions. First, there is substantial scope for new communities of practice to form within scholarly communities that place sharing and collaboration/open participation at their focus. Second, both the principles and practices of FOSS can be more deeply ingrained within scholarship, striking a balance between pragmatism and social ideology. Third, at present, Open Scholarship risks being subverted and compromised by commercial players. Fourth, the shift and acceleration towards a system of Open Scholarship will be greatly enhanced by a concurrent shift towards recognising a broader range of practices and outputs beyond traditional peer review and research articles. In order to achieve this, we propose the formulation of a new type of institutional mandate. We believe that there is a substantial need for research funders to invest in sustainable open scholarly infrastructure, and in the communities that support it, to avoid the capture and enclosure of key research services that would prevent optimal researcher behaviours. Such a shift could ultimately lead to a healthier scientific culture, and a system where competition is replaced by collaboration, resources (including time and people) are shared and acknowledged more efficiently, and research becomes inherently more rigorous, verified, and reproducible.