Badge-based achievement systems are increasingly being used to drive user participation and engagement across a variety of platforms and contexts. Despite positive anecdotal reports, there is currently little empirical evidence of their efficacy in particular domains. With the recent rapid growth of tools for online learning, an interesting open question for educators is the extent to which badges can positively affect student participation. In this paper, we report on a large-scale (n > 1000) randomized, controlled experiment measuring the impact of incorporating a badge-based achievement system into an online learning tool. We find a highly significant positive effect on the quantity of students' contributions, without a corresponding reduction in their quality, as well as on the period of time over which students engaged with the tool. Students enjoyed being able to earn badges and indicated a strong preference for having them available in the user interface.
Identifying and correcting syntax errors is a challenge every novice programmer confronts. As educators, the more we understand about the nature of these errors and how students respond to them, the more effective our teaching can be. It is well known that a small number of error types account for a disproportionately large share of the errors students encounter when learning to program. In this paper, we examine how long students spend resolving the most common syntax errors, and find that certain types of errors are not solved any more quickly by higher-ability students. Moreover, these errors consume a large amount of student time, suggesting that targeted teaching interventions may yield a significant payoff in increased student productivity.
Debugging is an important skill for novice programmers to acquire. Error messages help novices to locate and correct errors, but compiler messages are frequently inadequate. We have developed a system that provides enhanced error messages, including concrete examples that illustrate the kind of error that has occurred and how that kind of error could be corrected. We evaluate the effectiveness of the enhanced error messages with a controlled empirical study and find no significant effect.
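The abstract does not describe the system's implementation, but the general technique is easy to illustrate. The minimal sketch below, in Python, pairs a raw syntax error with a plain-language explanation and a concrete before/after example of a fix; the ENHANCEMENTS table and enhanced_message function are hypothetical names invented here, and the matched message text ("expected ':'") is what recent CPython versions emit.

```python
# A minimal sketch of an enhanced-error-message tool (not the authors'
# system). It compiles student code and, when a syntax error occurs,
# appends an explanation and a concrete example of the error and a fix.
# ENHANCEMENTS and enhanced_message are hypothetical names.

ENHANCEMENTS = {
    # Message fragment emitted by recent CPython versions.
    "expected ':'": (
        "Compound statements such as 'if', 'for', and 'def' must end "
        "with a colon.",
        "Error: if x > 3\nFixed: if x > 3:",
    ),
}

def enhanced_message(source: str) -> str:
    """Compile source; on a SyntaxError, enhance the raw message with an
    explanation and example when the error type is recognised."""
    try:
        compile(source, "<student code>", "exec")
        return "No syntax errors detected."
    except SyntaxError as err:
        raw = f"SyntaxError: {err.msg} (line {err.lineno})"
        for fragment, (explanation, example) in ENHANCEMENTS.items():
            if fragment in (err.msg or ""):
                return f"{raw}\n{explanation}\n{example}"
        return raw  # unrecognised error: fall back to the raw message

print(enhanced_message("if x > 3\n    print(x)"))
```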
A participatory learning approach, combined with both a traditional and a competitive assessment, was used to motivate students and promote a deep approach to learning biochemistry. Students were challenged to research, author, and explain their own multiple-choice questions (MCQs). They were also required to answer, evaluate, and discuss MCQs written by their peers. The technology used to support this activity was PeerWise, a freely available, innovative web-based system that supports students in the creation of an annotated question repository. In this case study, we describe students' contributions to, and perceptions of, the PeerWise system for a cohort of 107 second-year biomedical science students from three degree streams studying a core biochemistry subject. Our study suggests that the students are eager participants and produce a large repository of relevant, good-quality MCQs. In addition, they rate the PeerWise system highly and use higher-order thinking skills while taking an active role in their learning. We also discuss potential issues and future work using PeerWise for biomedical students.
Common exam practice centres on two question types: code tracing (reading) and code writing. It is commonly believed that code tracing is easier than code writing, but it seems clear that each demands different skills. These question types also differ in the marks they carry on an exam. Pedagogically, code tracing on paper is an authentic task, whereas code writing on paper is less so. Yet few instructors are willing to forgo the code writing question on an exam. Is there another way, easier to grade, that captures the "problem solving through code creation" process we wish to examine? In this paper we propose Parsons puzzle-style problems for this purpose. We explore their potential both qualitatively, through interviews, and quantitatively, through a set of CS1 exams. We find notable correlation between Parsons scores and code writing scores, but low correlation between code writing and tracing and between Parsons and tracing. We also make the case that marks from a Parsons problem reveal what students don't know (in both syntax and logic) far less ambiguously than marks from a code writing problem. We make recommendations on the design of Parsons problems for the exam setting, discuss their potential uses, and urge further investigation of Parsons problems for the assessment of CS1 students.
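For readers unfamiliar with the format: a Parsons problem gives students the lines of a working solution in scrambled order, and the task is to reassemble them (restoring indentation, in the two-dimensional variant). The sketch below, in Python, shows how such a puzzle can be generated from a known solution; the sum_evens task is a hypothetical example invented here, not taken from the paper's exam materials.

```python
# A minimal sketch of generating a Parsons-style puzzle from a known
# solution. The student sees only the shuffled, unindented fragments
# and must restore both order and indentation. The sum_evens task is a
# hypothetical example, not from the paper.
import random

solution = [
    "def sum_evens(numbers):",
    "    total = 0",
    "    for n in numbers:",
    "        if n % 2 == 0:",
    "            total += n",
    "    return total",
]

# Strip indentation so reconstructing it becomes part of the task,
# then shuffle the fragments to produce the puzzle.
fragments = [line.strip() for line in solution]
random.shuffle(fragments)
print("\n".join(fragments))
```

Because the fragments come from a known solution, a student's response can be graded by direct comparison against the original ordering, which is one reason such problems are easier to grade than free-form code writing.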