The popularity of online courses with open access and unlimited student participation, so-called massive open online courses (MOOCs), has grown rapidly. Students, professors, and universities all have an interest in accurate measures of students' proficiency in MOOCs. These measurements, however, face several challenges: (a) assessments are dynamic, as items can be added, removed, or replaced by a course author at any time; (b) students may be allowed several attempts within one assessment; and (c) assessments may include too few items for accurate individual-level conclusions. Consequently, the common psychometric models and techniques of classical test theory (CTT) and item response theory (IRT) are not well suited to measuring proficiency. In this study we address this gap by proposing cross-classification multilevel logistic extensions of the most common IRT model, the Rasch model, aimed at improving the assessment of student proficiency by modeling the effect of attempts and by incorporating non-assessment data such as a student's interaction with video lectures and practical tasks. We illustrate these extensions on logged data from one MOOC and check their quality using a cross-validation procedure on three MOOCs. We found that (a) performance changes over attempts depend on the student: for some students performance improves, whereas for others it may deteriorate; (b) similarly, the change over attempts varies across items; (c) a student's activity with video lectures and practical tasks is a significant predictor of response correctness, in the sense that higher activity leads to a higher chance of a correct response; and (d) the overall accuracy of predicting students' item responses using the extensions is 6% higher than with the traditional Rasch model. In sum, our results show that the approach improves assessment procedures in MOOCs and can serve as an additional source for accurate conclusions about students' proficiency.
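To make the modeling idea concrete, one plausible form of such a cross-classified extension is sketched below. This is an illustrative specification, not necessarily the exact one estimated in the study: the probability of a correct response by student p to item i at attempt a is modeled as a logistic function of crossed student and item effects, student- and item-specific attempt slopes, and activity covariates,

\Pr(Y_{pia} = 1) = \operatorname{logit}^{-1}\!\big( \theta_p + \delta_p (a - 1) - b_i - \gamma_i (a - 1) + \beta_1 v_p + \beta_2 t_p \big),

where \theta_p is the proficiency of student p, b_i the difficulty of item i, \delta_p and \gamma_i random attempt slopes varying over students and over items (capturing findings (a) and (b) above), and v_p and t_p measures of the student's video-lecture and practical-task activity with fixed effects \beta_1 and \beta_2. All symbol names here are assumed for illustration.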
Massive open online courses (MOOCs) are increasingly popular among students of various ages and at universities around the world. The main aim of a MOOC is growth in students' proficiency, which is why students, professors, and universities are interested in measuring that growth accurately. Traditional psychometric approaches based on item response theory (IRT) assume that a student's proficiency is constant over time and are therefore not well suited to measuring growth. In this study we sought to go beyond this assumption by (a) proposing to measure two components of growth in proficiency in MOOCs; (b) applying this idea in two dynamic extensions of the most common IRT model, the Rasch model; (c) illustrating these extensions through analyses of logged data from three MOOCs; and (d) checking the quality of the extensions using a cross-validation procedure. We found that proficiency grows both across whole courses and within learning objectives. In addition, our dynamic extensions fit the data better than the original Rasch model, and both extensions performed well, with an average accuracy of .763 in predicting students' responses in real MOOCs.
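A minimal sketch of how the two growth components could enter such a dynamic extension follows; the linear-growth form and all symbols are assumptions made for illustration, not the authors' exact model. The probability of a correct response is allowed to depend on how far student p has progressed in the course and within the current learning objective at the moment of answering item i,

\Pr(Y_{pi} = 1) = \operatorname{logit}^{-1}\big( \theta_p + \beta_c \, c_{pi} + \beta_o \, o_{pi} - b_i \big),

where \theta_p is the initial proficiency of student p, b_i the difficulty of item i, c_{pi} the student's position in the course as a whole, o_{pi} the position within the current learning objective, and \beta_c and \beta_o the course-level and objective-level growth components. Setting \beta_c = \beta_o = 0 recovers the original Rasch model, which is what the reported model comparison tests against.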