This paper presents a detailed study of a form of academic dishonesty that involves the use of multiple accounts for harvesting solutions in a Massive Open Online Course (MOOC). We term it CAMEO: Copying Answers using Multiple Existences Online. A person using CAMEO sets up one or more harvesting accounts for collecting correct answers; these answers are then submitted in the user's master account for credit. The study has three main goals: determining the prevalence of CAMEO, studying its detailed characteristics, and inferring the motivation(s) for using it. For the physics course that we studied, about 10% of the certificate earners used this method to obtain more than 1% of their correct answers, and more than 3% of the certificate earners used it to obtain the majority (>50%) of their correct answers. We discuss two likely consequences of CAMEO: jeopardizing the value of MOOC certificates as academic credentials, and generating misleading conclusions in educational research. Based on our study, we suggest methods for reducing CAMEO. Although this study was conducted on a MOOC, CAMEO can be used in any learning environment that enables students to have multiple accounts.
The study presented in this paper deals with copying answers in MOOCs. Our findings show that a significant fraction of the certificate earners in the course that we studied used what we call harvesting accounts to find correct answers that they later submitted in their main account, the account for which they earned a certificate. In total, ∼2.5% of the users who earned a certificate in the course obtained the majority of their points by using this method, and ∼10% of them used it to some extent. This paper has two main goals. The first is to define the phenomenon and demonstrate its severity. The second is to characterize key factors within the course that affect it and to suggest possible remedies that are likely to decrease the amount of cheating. The immediate implications of this study concern MOOCs. However, we believe that the results generalize beyond MOOCs, since this strategy can be used in any learning environment that does not identify all registrants.
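The harvesting pattern described above can be sketched as a simple heuristic: flag a submission when a different account on the same IP address submitted a correct answer to the same problem shortly beforehand. This is an illustrative sketch, not the authors' algorithm; the record fields, the `flag_cameo` function, and the five-minute time window are all assumptions for the example.

```python
# Hypothetical sketch of CAMEO-style detection. All field names and the
# time window are illustrative assumptions, not the study's actual method.
from dataclasses import dataclass

@dataclass
class Submission:
    account: str
    problem: str
    ip: str
    timestamp: float  # seconds since course start
    correct: bool

def flag_cameo(submissions, window=300.0):
    """Return (account, problem) pairs where a correct answer was submitted
    shortly after another account on the same IP got the same problem right."""
    flagged = set()
    for s in submissions:
        if not s.correct:
            continue
        for h in submissions:
            if (h.correct
                    and h.account != s.account
                    and h.ip == s.ip
                    and h.problem == s.problem
                    and 0 < s.timestamp - h.timestamp <= window):
                flagged.add((s.account, s.problem))
    return flagged
```

A production detector would of course need to handle shared networks (labs, households) and use richer timing statistics; the point here is only the pairing of a harvesting submission with a later master-account submission.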
Evidence from various domains underlines the critical role that human factors, and especially trust, play in practitioners' adoption of technology. In the case of Artificial Intelligence (AI)-powered tools, the issue is even more complex because of practitioners' AI-specific misconceptions, myths, and fears (e.g., mass unemployment and privacy violations). In recent years, AI has been incorporated increasingly into K-12 education. However, little research has been conducted on the trust and attitudes of K-12 teachers towards the use and adoption of AI-powered Educational Technology (AI-EdTech). This paper sheds light on teachers' trust in AI-EdTech and presents effective professional development strategies to increase teachers' trust in, and willingness to apply, AI-EdTech in their classrooms. Our experiments with K-12 science teachers were conducted around their interactions with a specific AI-powered assessment tool (termed AI-Grader), using both synthetic and real data. The results indicate that presenting teachers with explanations of (i) how AI makes decisions, particularly compared with human experts, and (ii) how AI can complement teachers and give them additional strengths, rather than replacing them, can reduce teachers' concerns and improve their trust in AI-EdTech. The contribution of this research is threefold. First, it emphasizes the importance of increasing teachers' theoretical and practical knowledge about AI in educational settings to gain their trust in AI-EdTech in K-12 education. Second, it presents a teacher professional development program (PDP), as well as a discourse analysis of the teachers who completed it. Third, based on the observed results, it presents clear suggestions for future PDPs aiming to improve teachers' trust in AI-EdTech.

What is already known about this topic
- Human factors, and especially trust, play a critical role in practitioners' adoption of technology.
- In recent years, AI has been incorporated increasingly into K-12 education.
- Little research has been conducted on the trust and attitudes of K-12 teachers towards the use and adoption of AI-powered Educational Technology.

What this paper adds
- This research emphasizes the importance of increasing teachers' theoretical and practical knowledge about AI in educational settings to gain their trust in AI-EdTech in K-12 education.
- It presents a teacher professional development program (PDP) to increase teachers' trust in AI-EdTech, as well as a discourse analysis of the teachers who completed it.
- It presents clear suggestions for future PDPs aiming to improve teachers' trust in AI-EdTech.

Implications for practice and/or policy
- Pre- and in-service teacher education programs that aim to increase teachers' trust in AI-EdTech should include a section providing teachers with a basic understanding of AI.
- PDPs aiming to increase teachers' trust in AI-EdTech should focus on concrete pedagogical tasks and specific AI-powered tools that teachers consider helpful and worth the effort to learn.
- AI-EdTech should not restr...
One of the methods of cheating in online environments reported in the literature is CAMEO (Copying Answers using Multiple Existences Online), in which harvesting accounts are used to obtain correct answers that are later submitted in the master account, which earns the student credit towards a certificate. In previous research we developed an algorithm to identify and label submissions produced by the CAMEO method; this algorithm relied on the IP addresses of the submissions. In this study we use this tagged sample of submissions to (i) compare the influence of student and problem characteristics on CAMEO and (ii) build a random forest classifier that detects CAMEO submissions without relying on IP addresses, achieving sensitivity and specificity of 0.966 and 0.996, respectively. Finally, we analyze the importance of the model's features, finding that student features are the most important variables for the correct classification of CAMEO submissions and concluding that student features have more influence on CAMEO than problem features.
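The pipeline the abstract describes (train a random forest on labeled submissions, then report sensitivity, specificity, and feature importances) can be sketched with scikit-learn. This is not the authors' code: the synthetic data stands in for the student/problem features, and the class imbalance and hyperparameters are arbitrary assumptions.

```python
# Illustrative sketch of the classifier pipeline described in the abstract,
# using synthetic data in place of real submission features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 10 features, ~10% positive (CAMEO) class.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)  # true-positive rate
specificity = tn / (tn + fp)  # true-negative rate

# Feature importances indicate which variables drive the classification,
# analogous to the student-vs-problem feature comparison in the study.
importances = clf.feature_importances_
```

On real data one would group by feature type (student vs. problem) and sum the importances within each group to reproduce the kind of comparison the abstract reports.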
The rich data that Massive Open Online Course (MOOC) platforms collect on the behavior of millions of users provide a unique opportunity to study human learning and to develop data-driven methods that can address the needs of individual learners. This type of research falls into the emerging field of learning analytics. However, learning analytics research tends to ignore the reliability of results that are based on MOOC data, which is typically noisy and generated by a largely anonymous crowd of learners. This paper provides evidence that learning analytics in MOOCs can be significantly biased by users who abuse the anonymity and open nature of MOOCs, for example by setting up multiple accounts; both their numbers and their aberrant behavior distort results. We identify these users, denoted fake learners, using dedicated algorithms. Our methodology for measuring the bias caused by fake learners' activity combines ideas from Replication Research and Sensitivity Analysis. We replicate two highly cited learning analytics studies with and without the fake learners' data and compare the results. While in one study the results were relatively stable against fake learners, in the other, removing the fake learners' data significantly changed the results. These findings raise concerns regarding the reliability of learning analytics in MOOCs and highlight the need to develop more robust, generalizable, and verifiable research methods.
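The replication-plus-sensitivity-analysis idea above amounts to computing a statistic of interest on the full dataset, recomputing it after excluding accounts flagged as fake, and comparing the two. A minimal sketch, where the metric (mean grade) and the flagged accounts are purely illustrative assumptions:

```python
# Minimal sketch of sensitivity analysis against fake learners: compare a
# statistic computed with and without flagged accounts. The metric and the
# flagging are illustrative assumptions, not the paper's actual analyses.
def mean_grade(records, exclude=frozenset()):
    """Mean grade over (user, grade) records, skipping excluded users."""
    grades = [g for (user, g) in records if user not in exclude]
    return sum(grades) / len(grades)

records = [("a", 0.9), ("b", 0.8), ("fake1", 1.0), ("fake2", 1.0)]
fake_learners = {"fake1", "fake2"}  # output of a fake-learner detector

with_fakes = mean_grade(records)
without_fakes = mean_grade(records, exclude=fake_learners)
bias = with_fakes - without_fakes  # positive: fake learners inflate the metric
```

In a real replication the "metric" would be the original study's full analysis (e.g., a regression or correlation), rerun on both versions of the data.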