Multiple-choice quizzes (MCQs) are a popular form of assessment. The rapid shift to online assessment during the Covid-19 pandemic in 2020 drove the uptake of MCQs, yet limited invigilation and wide access to material on the internet allow students to solve the questions via internet search. ChatGPT, an artificial intelligence (AI) agent trained on a large language model, exacerbates this challenge, as it responds to information-retrieval questions with speed and a good level of accuracy. In this opinion piece, I contend that while the place of MCQs in summative assessment may be uncertain, current shortcomings of ChatGPT offer opportunities for continued formative use. I outline how ChatGPT's limitations can inform effective question design, provide tips for effective multiple-choice question design, and outline implications for both academics and learning developers. This piece contributes to the emerging debate on the impact of artificial intelligence on assessment in higher education. Its purpose is threefold: to (1) enhance academics' understanding of effective MCQ design, (2) promote shared understanding and inform dialogue between academics and learning developers about MCQ assessment, and (3) highlight the potential implications for learning support.