The use of automatic grading tools has become nearly ubiquitous in large undergraduate programming courses, and recent work has focused on improving the quality of automatically generated feedback. However, there is a relative lack of data directly comparing student outcomes when receiving computer-generated versus human-written feedback. This paper addresses this gap by splitting one 90-student class into two feedback groups and analyzing differences in the two cohorts' performance. The course is an introduction to artificial intelligence with programming homework assignments. One group of students received detailed computer-generated feedback on their programming assignments describing which parts of the algorithms' logic were missing; the other group additionally received human-written feedback describing how their programs' syntax relates to issues with their logic, along with qualitative (style) recommendations for improving their code. Results on quizzes and exam questions suggest that human feedback helps students attain a better conceptual understanding, but analyses found no difference between the groups' ability to collaborate on the final project. The course grade distribution revealed that students who received human-written feedback performed better overall; this effect was most pronounced in the middle two quartiles of each group. These results suggest that feedback about the syntax-logic relation may be a primary mechanism by which human feedback improves student outcomes.

CCS CONCEPTS
• Social and professional topics → Computer science education; Computational thinking; Student assessment.
To make lifelike, versatile learning adaptive in an artificial domain, one needs a very diverse set of behaviors to learn. We propose a parameterized distribution of classic control-style tasks with minimal information shared between tasks. We discuss what makes a task trivial and offer a basic metric, time in convergence, that measures triviality. We then investigate analytic and empirical approaches to generating reward structures for tasks based on their dynamics, with the goal of minimizing triviality. Contrary to our expectations, populations evolved on reward structures that incentivize the most stable locations in state space spend the least time in convergence as we have defined it, because of the outsized importance our metric assigns to fine-tuning of behavior in these contexts. This work paves the way toward an understanding of which task distributions enable the development of learning.
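The abstract names the time-in-convergence metric without defining it. As a rough illustration only (the function name, the tolerance parameter, and the convergence criterion below are assumptions, not the paper's definitions), one might measure the fraction of timesteps a trajectory spends near its final state, so that quickly-settling ("trivial") behaviors score high:

```python
import numpy as np

def time_in_convergence(states, tol=1e-2):
    """Fraction of timesteps a 1-D state trajectory spends within
    `tol` of its final value.

    Illustrative sketch only: a rough proxy for how quickly a
    behavior settles (higher values suggest a more trivial task).
    The paper's actual metric definition is not given in the abstract.
    """
    states = np.asarray(states, dtype=float)
    final = states[-1]                       # treat the last state as the settled value
    converged = np.abs(states - final) <= tol
    return converged.mean()

# A trajectory that decays quickly spends most of its time converged,
# while one that drifts slowly toward its endpoint does not:
fast = [1.0, 0.1, 0.001] + [0.0] * 97
slow = list(np.linspace(1.0, 0.0, 100))
print(time_in_convergence(fast))   # high fraction
print(time_in_convergence(slow))   # low fraction
```

Under this toy definition, reward structures that pull agents toward highly stable regions of state space would yield high time-in-convergence scores, which is the intuition the abstract says its results complicate.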
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.