Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale 2017
DOI: 10.1145/3051457.3051467
Writing Reusable Code Feedback at Scale with Mixed-Initiative Program Synthesis

Abstract: In large introductory programming classes, teacher feedback on individual incorrect student submissions is often infeasible. Program synthesis techniques are capable of fixing student bugs and generating hints automatically, but they lack the deep domain knowledge of a teacher and can generate functionally correct but stylistically poor fixes. We introduce a mixed-initiative approach which combines teacher expertise with data-driven program synthesis techniques. We demonstrate our novel approach in two systems …

Cited by 106 publications (67 citation statements)
References 22 publications
“…The extension of our approach to larger programming problems, as found in more advanced courses, is left for future work. Focusing on small to medium size programs is in line with related work on automated feedback generation for introductory programming (e.g., D'Antoni et al [9], Singh et al [32], Head et al [19]). We stress that the state-of-the-art in teaching is manual feedback (as well as failing test cases); thus, automation, even for small to medium size programs, promises huge benefits.…”
Section: Threats To Validity
confidence: 61%
“…To avoid the threat that a student's attempt is repaired by her own future correct solution, we split the data into two sets. From the first (chronologically earlier) set we take only the correct solutions: these solutions are then clustered, and the obtained clusters are used during the repair of the incorrect attempts. From the second (chronologically later) set we take only the incorrect attempts: on these attempts we perform repair.…”
Section: MOOC Evaluation
confidence: 99%
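The chronological split described in the quote above can be sketched as follows. This is a minimal illustration, not the cited paper's implementation; the field names (`time`, `correct`, `code`) and the cutoff-based split are assumptions for the example.

```python
from datetime import datetime

def chronological_split(submissions, cutoff):
    """Split submissions at a cutoff timestamp.

    `submissions` is a list of dicts with keys 'time' (datetime),
    'correct' (bool), and 'code' (str) -- illustrative field names.
    Returns (repair_candidates, attempts_to_repair): correct solutions
    from the earlier set, and incorrect attempts from the later set.
    """
    earlier = [s for s in submissions if s["time"] < cutoff]
    later = [s for s in submissions if s["time"] >= cutoff]
    # Only correct solutions from the earlier set feed the clustering step.
    repair_candidates = [s["code"] for s in earlier if s["correct"]]
    # Only incorrect attempts from the later set are repaired, so no attempt
    # can be fixed using the same student's own future correct solution.
    attempts_to_repair = [s["code"] for s in later if not s["correct"]]
    return repair_candidates, attempts_to_repair
```

Splitting by time rather than at random is the point of the design: it mimics deployment, where only past correct solutions are available when a new incorrect attempt arrives.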
“…A focus shift from automated grading to automated feedback was witnessed in the most recent decade. Specifically, the focus was on feedback generation through data-driven approaches [17], [18]. Massive Open Online Programming Courses provided large datasets of programming assignments, which are critical to making such approaches possible.…”
Section: Related Work (A. Automated Grading and Feedback)
confidence: 99%
“…[0, 2, 4, 6, 8, 10] and [0, 1, 3, 6, 10, 15]). The other variables have the same update sequences (e.g., i is the same sequence of updates [1, 2, 3, 4, 5, 6] for both incorrect and fixed code trace). This technique allows us to extract only traces for the key variables the student should pay attention to.…”
Section: Trace Differencing
confidence: 99%
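The trace-differencing idea in the quote above can be sketched as: record each variable's sequence of updates in both the incorrect and the fixed program, then flag only the variables whose sequences differ. A minimal sketch, assuming a trace is represented as a dict from variable name to its list of successive values; the variable name `result` is hypothetical, chosen to mirror the sequences quoted in the text.

```python
def key_variables(trace_incorrect, trace_fixed):
    """Return variables whose update sequences differ between two traces.

    Each trace maps a variable name to the list of successive values it
    took during execution (an illustrative trace representation).
    """
    shared = set(trace_incorrect) & set(trace_fixed)
    return sorted(v for v in shared
                  if trace_incorrect[v] != trace_fixed[v])

# Mirroring the example in the text: one variable's updates differ
# between the incorrect and fixed traces, while i's do not.
incorrect = {"result": [0, 2, 4, 6, 8, 10], "i": [1, 2, 3, 4, 5, 6]}
fixed = {"result": [0, 1, 3, 6, 10, 15], "i": [1, 2, 3, 4, 5, 6]}
```

Filtering out variables with identical update sequences is what lets the feedback point the student at only the variables that actually behave differently in the fixed program.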