2023
DOI: 10.47408/jldhe.vi27.1009

On ChatGPT: what promise remains for multiple choice assessment?

Abstract: Multiple-choice quizzes (MCQs) are a popular form of assessment. A rapid shift to online assessment during the Covid-19 pandemic in 2020 drove the uptake of MCQs, yet limited invigilation and wide access to material on the internet allow students to solve the questions via internet search. ChatGPT, an artificial intelligence (AI) agent trained on a large language model, exacerbates this challenge, as it responds to information retrieval questions with speed and a good level of accuracy. In this opinion piece, …

Cited by 8 publications (4 citation statements)
References 20 publications
“…Lastly, the study opens the door to a broader inquiry into the validity and reliability of MCQ-based assessments in higher education, since passing these exams can be achieved by AI models. Thus, refined approaches to the effective design of MCQs are needed to maintain the reliability of MCQs as an assessment method in higher education (Gonsalves, 2023). Future studies are recommended, taking into consideration rigorous design, a variety of tested subjects, different language and cultural aspects, and different exam settings.…”
Section: Discussion (mentioning)
confidence: 99%
“…The limitations of current language models can shed light on designing MCQs that promote active engagement and critical thinking. Strategies like incorporating multimedia, using conditional logic, grounding questions in current events, and meticulously crafting distractors can counteract the model's pattern recognition, making it more difficult for the model to simply identify answer choices based on statistical patterns (Gonsalves, 2023).…”
Section: Discussion (mentioning)
confidence: 99%
“…The literature identifies multiple solutions to the above problem. Gonsalves (2023) suggests that educators using multiple-choice tests for assessment may consider outsmarting the system by taking advantage of the current limitations of generative AI platforms, such as their inability to interpret visual media or to understand up-to-date information beyond their training data. For instance, multiple-choice questions could incorporate visual elements, such as images, figures, or charts, and require students to interact with these elements.…”
Section: Rethinking Assessment Design (mentioning)
confidence: 99%