2023
DOI: 10.1057/s41307-023-00323-2
Decoding Academic Integrity Policies: A Corpus Linguistics Investigation of AI and Other Technological Threats

Abstract: This study provides a corpus analysis of academic integrity policies from Higher Education Institutions (HEIs) worldwide, exploring how they address the emerging issues posed by novel technological threats such as Automated Paraphrasing Tools (APTs) and Generative Artificial Intelligence (GenAI) tools, including ChatGPT. The analysis of 142 policies, conducted in both November 2022 and May 2023, revealed a significant gap regarding the mention of AI and associated technologies in publicly available academic integ…

Cited by 20 publications (9 citation statements)
References 61 publications
“…A recent study found evidence of a clear absence of clarity regarding the use of ChatGPT and similar GenAI tools in academic policies. Out of 142 HEIs surveyed in May 2023, only one explicitly prohibited the use of AI [43]. This is an important finding as there is evidence to suggest that academic dishonesty is inversely related to understanding and acceptance of academic integrity policies [44,45].…”
Section: Challenges
Mentioning confidence: 97%
“…AI-generated submissions to academic journals have been described as a ‘coming tsunami’ ( Tate et al , 2023 ) and it has been claimed that publishers need to anticipate the potential of wholly AI-generated submissions ( Anderson et al , 2023 ). To date, much of the focus in academia has been on the use of GenAI tools such as ChatGPT, thanks to their human-like text production capabilities ( Perkins, 2023a ; Perkins & Roe, 2023 ), although other AI tools such as Elicit have also shown promise in summarising literature and identifying source material ( Roe et al , 2023 ). Such capabilities have fuelled the debate on the positive and negative impacts of GenAI on scholarly work.…”
Section: Literature Review
Mentioning confidence: 99%
“…The rapid advancement of AI technologies and their increasing application in various domains has underscored the need for comprehensive policy analysis and ethical considerations. Despite the widespread use of digital and AI technologies, only a small percentage of higher education institutions (HEIs) have developed formal policies surrounding their use since the launch of ChatGPT ( Perkins & Roe, 2023 ; Xiao et al , 2023 ). Just as HEIs had to quickly develop policies and guidelines on students’ use of GenAI tools, academic publishers were also forced to consider how to manage the use of GenAI tools by authors.…”
Section: Introduction
Mentioning confidence: 99%
“…As a potential solution, we propose an AIAS in which educational institutions can adapt to their needs. The AIAS is a response to these broader concerns, amid calls to delineate the appropriate use of GenAI tools in education (Perkins & Roe, 2023b), design curricula with GenAI in mind (Bahroun et al, 2023), set clear guidelines for when and how GenAI can be used (Cotton et al, 2023), and support transparency in GenAI usage (Perkins & Roe, 2023a). Given that few global HEIs have developed clear policies for AI, let alone the more specific and novel field of Generative AI (Perkins & Roe, 2023b;Xiao et al, 2023), being able to employ a practical technique that fits within the wider constraints of a broader HE Institution policy is potentially of significant benefit to educators and students.…”
Section: The Future of HE Assessment
Mentioning confidence: 99%
“…We recognise that providing a scale-based solution for GenAI tool usage needs additional context and urge HEIs to continue developing GenAI policies and student-facing guidelines that are flexible enough to cover the rapidly developing field while still allowing for the core elements of academic integrity to be considered. Recent work has demonstrated the slow speed of HEIs in creating formal policy documentation (Fowler et al, 2023;Perkins & Roe, 2023b;Xiao et al, 2023); however, guidelines and supporting multimedia content can be an effective way to provide additional context to how GenAI tools might be used in a safe and ethical manner. These guidelines may cover areas such as the ethics of GenAI tool usage, explaining how these tools can be cited and used in a transparent manner, exploring the limitations and biases of GenAI tools, and addressing security and privacy concerns.…”
Section: Supporting Guidelines for AI Use in Assessment
Mentioning confidence: 99%