Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education
DOI: 10.1145/2899415.2899443
Automatic Grading of Programming Exercises using Property-Based Testing

Cited by 14 publications (6 citation statements)
References 11 publications
“…The last decade underwent a slight shift in what is considered to be a correct, partially correct, and incorrect solution by most practitioners. First, producing the expected output is not by itself the only factor that weighs in the grade of the program [27]. The adopted strategy, quality of the source code, algorithmic complexity, and…”
Section: Discussion (mentioning)
confidence: 99%
“…many other aspects are also relevant indicators and, thus, may affect grades [27] and originate feedback to improve learning and promote good practices [238]. Then, in the limit, a program with a syntax error might be closer to the solution than a program failing a single test case.…”
Section: Discussion (mentioning)
confidence: 99%
“…The use of automatic grading tools has become nearly ubiquitous in large undergraduate programming courses to accommodate increased enrollment in traditional CS programs as well as online courses. Several authors have addressed automatic grading platforms [1, 3, 4, 7-9, 12]. Many of the systems used so far have focused on binary criteria: does the program run? Does the program produce the right output for problem 1? Does the program produce the right output for problem 2? And so on.…”
Section: Introduction (mentioning)
confidence: 99%
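
The binary output checks described in this excerpt contrast with the property-based testing named in the cited paper's title. A minimal sketch of the difference, assuming Python and the Hypothesis library (the sorting exercise, function names, and properties are illustrative, not the tooling or exercises used in the paper):

# Hypothetical grading checks for a "sort a list of integers" exercise.
# Everything here (names, exercise, properties) is illustrative only.
from hypothesis import given, strategies as st

def student_sort(xs):
    # Stand-in for a submitted solution.
    return sorted(xs)

# Binary output check: one fixed input, pass/fail.
def check_fixed_case():
    return student_sort([3, 1, 2]) == [1, 2, 3]

# Property-based check: Hypothesis generates many inputs and the grader
# asserts general properties (output is ordered and is a permutation of the input).
@given(st.lists(st.integers()))
def check_properties(xs):
    out = student_sort(xs)
    assert all(a <= b for a, b in zip(out, out[1:]))
    assert sorted(out) == sorted(xs)

if __name__ == "__main__":
    print("fixed case passed:", check_fixed_case())
    check_properties()  # raises AssertionError with a shrunk counterexample on failure
    print("property checks passed")

A fixed-case check accepts any submission that happens to handle the one scripted input, while a property-based check exercises many generated inputs and reports a minimal counterexample when a property fails, which is the distinction this line of work builds on.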
“…Additionally, much of the current literature assesses grading approaches by how closely their grades and feedback match those of human graders [1,4,6,12], rather than by student outcomes. Given the extent to which automatic grading has become commonplace, it is natural to ask what (if anything) is lost in the real world by replacing human grading with automatic feedback.…”
Section: Introduction (mentioning)
confidence: 99%