Formative feedback, aimed at helping students to improve their work, is an important factor in learning. Many tools that offer programming exercises provide automated feedback on student solutions. We have performed a systematic literature review to find out what kind of feedback is provided, which techniques are used to generate the feedback, how adaptable the feedback is, and how these tools are evaluated. We have designed a labelling scheme to classify the tools, and use Narciss' feedback content categories to classify feedback messages. We report on the results of coding a total of 101 tools. We have found that feedback mostly focuses on identifying mistakes and less on helping to fix problems or take a next step. Furthermore, teachers cannot easily adapt tools to their own needs. However, the diversity of feedback types has increased over the past decades, and new techniques are being applied to generate feedback that is increasingly helpful for students.
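To make the classification concrete, here is an illustrative sketch only: a Java enum of Narciss-style feedback content categories as used in this kind of review. The enum and its comments are our own construction, not an artefact of the paper; "identifying mistakes" corresponds to KM, while "fixing problems and taking a next step" corresponds to KH.

    // Hypothetical sketch: Narciss-style feedback content categories as a
    // Java enum. The enum, its names, and the mapping are added for illustration.
    enum FeedbackCategory {
        KNOWLEDGE_ABOUT_TASK_CONSTRAINTS, // KTC: requirements of the exercise
        KNOWLEDGE_ABOUT_CONCEPTS,         // KC: explanations of subject matter
        KNOWLEDGE_ABOUT_MISTAKES,         // KM: reported as the most common focus
        KNOWLEDGE_ABOUT_HOW_TO_PROCEED,   // KH: hints on fixes and next steps; rarer
        KNOWLEDGE_ABOUT_METACOGNITION     // KMC: reflection on the learning process
    }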
Because low-quality code can cause serious problems in software systems, students learning to program should pay attention to code quality early. Although many studies have investigated the mistakes that students make during programming, we do not know much about the quality of their code. This study examines the presence of quality issues related to program flow, choice of programming constructs and functions, clarity of expressions, and decomposition and modularization in a large set of student Java programs. We investigated which issues occur most frequently, whether students are able to solve these issues over time, and whether the use of code analysis tools has an effect on issue occurrence. We found that students hardly fix issues, in particular issues related to modularization, and that the use of tooling does not have much effect on the occurrence of issues.
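The paper's full issue catalogue is not reproduced here; the constructed Java snippet below (class and method names are ours, not taken from the studied dataset) illustrates two of the issue kinds named in the abstract, clarity of expressions and decomposition, each next to a cleaned-up form.

    // Constructed example of two issue categories named above:
    // clarity of expressions and decomposition/modularization.
    public class IssueExamples {

        // Issue: clarity -- a boolean result built with a redundant if/else.
        static boolean passedVerbose(int score) {
            if (score >= 55) {
                return true;
            } else {
                return false;
            }
        }

        // Fix: return the condition directly.
        static boolean passed(int score) {
            return score >= 55;
        }

        // Issue: decomposition -- duplicated loops instead of one helper.
        static void printStats(int[] a, int[] b) {
            int sumA = 0;
            for (int x : a) sumA += x;
            int sumB = 0;
            for (int x : b) sumB += x;
            System.out.println(sumA + " " + sumB);
        }

        // Fix: extract the shared logic into a method.
        static int sum(int[] values) {
            int total = 0;
            for (int v : values) total += v;
            return total;
        }

        public static void main(String[] args) {
            System.out.println(passedVerbose(70) + " == " + passed(70));
            printStats(new int[]{1, 2}, new int[]{3, 4});
            System.out.println(sum(new int[]{1, 2}) + " " + sum(new int[]{3, 4}));
        }
    }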
Educational research has established that learning can be defined as an enduring change in behaviour resulting from practice or other forms of experience. In introductory programming courses, proficiency is typically approximated through relatively small but frequent assignments and tests. Scaling these assessments to track significant behavioural change is challenging due to the subtle and complex metrics that must be collected from large student populations. Based on a four-semester study, we present an analysis of learning tool interaction data collected from 514 students and 38,796 solutions to practice programming exercises. We first evaluate the effectiveness of measuring workflow patterns to detect students at risk of failure within the first three weeks of the semester. Our early predictor accurately detects 81% of the students who struggle throughout the course. However, it also captures transient struggling: 43% of the students who ultimately did well in the course were classified as at risk. To better differentiate sustained from transient struggling, we further propose a trajectory metric that measures changes in programming behaviour. The trajectory metric detects 70% of the students who exhibit sustained struggling and misclassifies only 11% of the students who go on to succeed in the course. Overall, our results show how detecting changes in programming behaviour can help differentiate between learning and struggling in CS1.
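The abstract does not spell out how the trajectory metric is computed. As a minimal, hypothetical sketch, assume the trajectory is the least-squares slope of a weekly struggle score (for example, the fraction of failed submissions per week): a falling slope suggests transient struggling, while a flat or rising one suggests sustained struggling. All names and the score definition below are our assumptions, not the paper's method.

    import java.util.Arrays;

    // Hypothetical trajectory metric: the least-squares slope of a weekly
    // "struggle score". An illustrative sketch, not the paper's method.
    public class Trajectory {

        // Ordinary least-squares slope of the scores over weeks 0..n-1.
        static double slope(double[] weeklyStruggleScores) {
            int n = weeklyStruggleScores.length;
            double meanWeek = (n - 1) / 2.0;
            double meanScore = Arrays.stream(weeklyStruggleScores).average().orElse(0.0);
            double num = 0.0, den = 0.0;
            for (int week = 0; week < n; week++) {
                num += (week - meanWeek) * (weeklyStruggleScores[week] - meanScore);
                den += (week - meanWeek) * (week - meanWeek);
            }
            return num / den; // needs at least two weeks of data
        }

        public static void main(String[] args) {
            double[] improving  = {0.8, 0.6, 0.3, 0.2, 0.1};  // transient struggling
            double[] persistent = {0.7, 0.7, 0.8, 0.75, 0.8}; // sustained struggling
            System.out.printf("improving slope:  %.3f%n", slope(improving));
            System.out.printf("persistent slope: %.3f%n", slope(persistent));
        }
    }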
Code quality has been receiving less attention than program correctness in both the practice of and research into programming education. Writing poor-quality code might be a sign of carelessness, or of not fully understanding programming concepts and language constructs. Teachers play an important role in addressing quality issues and encouraging students to write better code as early as possible. In this paper we explore to what extent teachers address code quality in their teaching, which code quality issues they observe, and how they would help novices improve their code. We presented low-quality student code to 30 experienced teachers and asked them which hints they would give and how the student should improve the code step by step. We compared these hints to the output of professional code quality tools. Although most teachers gave similar hints on reducing algorithmic complexity and removing clutter, they gave varying subsets of hints on other topics, and we found a large variety in how they would solve issues in code. We also noticed that professional code quality tools do not point out the algorithmic complexity issues that teachers mention. Finally, we give some general guidelines on how to approach code improvement.
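As a constructed before/after pair (not one of the fragments shown to the teachers; all names are ours), the snippet below reflects the two hint themes on which teachers agreed most: simplifying convoluted control flow and removing clutter.

    // Constructed before/after pair showing the two common hint themes.
    public class DiscountExample {

        // Before: deeply nested branches and a needless temporary variable.
        static double priceBefore(double amount, boolean member) {
            double result;
            if (member) {
                if (amount > 100) {
                    result = amount * 0.8;
                } else {
                    result = amount * 0.9;
                }
            } else {
                result = amount;
            }
            return result;
        }

        // After: guard clause plus a conditional expression; same behaviour.
        static double price(double amount, boolean member) {
            if (!member) {
                return amount;
            }
            return amount > 100 ? amount * 0.8 : amount * 0.9;
        }

        public static void main(String[] args) {
            System.out.println(priceBefore(120.0, true) + " == " + price(120.0, true));
        }
    }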