2021
DOI: 10.1007/978-3-030-80421-3_33

Automated Assessment of Learning Objectives in Programming Assignments

Abstract: Individual feedback is a core ingredient of a personalised learning path. However, it is also time-intensive and, as a teaching form, not easily scalable. To make individual feedback feasible for larger groups of students, we develop tool support for teaching assistants to use in the process of giving feedback. In this paper, we introduce Apollo, a tool that automatically analyses code uploaded by students with respect to their progression towards the learning objectives of the course. First, …
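To make the idea concrete, here is a minimal sketch of what an automated learning-objective check could look like. It is not Apollo's actual implementation: it assumes the JavaParser library (com.github.javaparser:javaparser-core), and the objective names and the construct-to-objective mapping are invented for illustration.

```java
// Hypothetical sketch of detecting learning objectives from code structure.
// Not Apollo's implementation; assumes the JavaParser library and
// invented objective names.
import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.body.MethodDeclaration;
import com.github.javaparser.ast.stmt.ForEachStmt;
import com.github.javaparser.ast.stmt.ForStmt;
import com.github.javaparser.ast.stmt.WhileStmt;

public class ObjectiveChecker {
    public static void main(String[] args) {
        // A toy "uploaded submission".
        String submission = """
                class Stats {
                    int sum(int[] xs) {
                        int s = 0;
                        for (int x : xs) { s += x; }
                        return s;
                    }
                }""";
        CompilationUnit cu = StaticJavaParser.parse(submission);

        // Map the presence of AST node types to (invented) learning objectives.
        boolean iteration = !cu.findAll(ForStmt.class).isEmpty()
                || !cu.findAll(ForEachStmt.class).isEmpty()
                || !cu.findAll(WhileStmt.class).isEmpty();
        boolean decomposition = !cu.findAll(MethodDeclaration.class).isEmpty();

        System.out.println("LO 'iteration':     " + (iteration ? "evidence found" : "no evidence"));
        System.out.println("LO 'decomposition': " + (decomposition ? "evidence found" : "no evidence"));
    }
}
```

A tool of this kind would presumably aggregate such evidence across submissions over time to estimate a student's progression, rather than judging a single upload.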

Cited by 8 publications (4 citation statements)
References 12 publications
“…While Chow et al focused on generating personalised hints for Python assignments, other authors focused on automated grading of Java. Rump et al [170] developed a tool to aid teaching assistants when manually reviewing submissions by generating a report that assesses how well the student performed against a set of learning objectives. Similarly, the tool by Dil and Osunde [109] aimed to help graders assess the correctness of the methodology, i.e. whether students' submissions contain the required methods or fields, rather than the correctness of the functionality.…”
Section: Languages Evaluated (RQ2); citation type: mentioning
confidence: 99%
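The "required methods or fields" style of check described in this statement can be illustrated with plain reflection. This is a hypothetical sketch, not the tool by Dil and Osunde; the class and member names are invented.

```java
// Hypothetical structural check: verify that a submission declares required
// members, independent of whether they behave correctly. All names invented.
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.List;

public class StructureCheck {
    // A submission we pretend a student uploaded.
    static class Submission {
        public int balance;                  // required field: present
        public void deposit(int amount) {}   // required method: present
        // withdraw(int) is missing, so the check below reports it.
    }

    public static void main(String[] args) {
        Class<?> cls = Submission.class;
        List<String> requiredMethods = List.of("deposit", "withdraw");
        List<String> requiredFields = List.of("balance");

        for (String name : requiredMethods) {
            boolean found = false;
            for (Method m : cls.getDeclaredMethods()) {
                if (m.getName().equals(name)) { found = true; break; }
            }
            System.out.println("method " + name + ": " + (found ? "present" : "MISSING"));
        }
        for (String name : requiredFields) {
            boolean found = false;
            for (Field f : cls.getDeclaredFields()) {
                if (f.getName().equals(name)) { found = true; break; }
            }
            System.out.println("field " + name + ": " + (found ? "present" : "MISSING"));
        }
    }
}
```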
“…Only six papers evaluate the feedback by comparing the automatically generated feedback to human-generated feedback [74,113,119,144,170,183]. Table 3 shows the count of tools by the different evaluation techniques that involved comparing the results from the AAT with those of a human, together with the authors' sentiment about the performance of the AAT.…”
Section: Performance Against Human Graders (RQ4, RQ5); citation type: mentioning
confidence: 99%
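One common way to quantify how closely a tool tracks human graders is an agreement statistic such as Cohen's kappa. This is an illustrative assumption on my part, not necessarily the comparison method used in the six cited papers, and the data below is made up.

```java
// Illustrative only: Cohen's kappa as one way to quantify agreement between
// tool-assigned and human-assigned labels. Invented data, not from the papers.
public class AgreementDemo {
    static double cohensKappa(int[] rater1, int[] rater2, int numLabels) {
        int n = rater1.length;
        double observed = 0;
        double[] p1 = new double[numLabels], p2 = new double[numLabels];
        for (int i = 0; i < n; i++) {
            if (rater1[i] == rater2[i]) observed++;  // raw agreement count
            p1[rater1[i]]++;
            p2[rater2[i]]++;
        }
        observed /= n;
        double expected = 0;  // agreement expected by chance
        for (int k = 0; k < numLabels; k++) expected += (p1[k] / n) * (p2[k] / n);
        return (observed - expected) / (1 - expected);
    }

    public static void main(String[] args) {
        int[] tool  = {1, 0, 2, 1, 1, 0, 2, 2};  // labels from the AAT
        int[] human = {1, 0, 2, 0, 1, 0, 2, 1};  // labels from a grader
        System.out.printf("kappa = %.3f%n", cohensKappa(tool, human, 3));
    }
}
```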
“…Another interesting conclusion of this work is that, in general, APA features can be organised according to whether they require execution of the program (dynamic analysis) or can be evaluated statically from the program code (static analysis). Both approaches present undeniable advantages [18], [28], but also some major drawbacks. Therefore, our approach falls within a third, less investigated kind of automatic assessment tool: hybrid analysis.…”
Section: Review of Related Work; citation type: mentioning
confidence: 99%
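The static/dynamic/hybrid distinction can be made concrete with a toy example. The sketch below is an invented illustration of hybrid analysis, not the authors' tool: one facet inspects the source text without running it, the other executes the code on test inputs.

```java
// Hypothetical sketch of hybrid assessment. The submission and both checks
// are invented examples.
public class HybridCheck {
    // The submission, as source text for the static facet ...
    static final String SOURCE = """
            static int factorial(int n) {
                return n <= 1 ? 1 : n * factorial(n - 1);
            }""";

    // ... and as executable code for the dynamic facet.
    static int factorial(int n) {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }

    public static void main(String[] args) {
        // Static facet: crude textual evidence that the solution is recursive
        // (a real tool would inspect the AST rather than the raw text).
        boolean looksRecursive = SOURCE.contains("factorial(n - 1)");

        // Dynamic facet: behavioural correctness on sample inputs.
        boolean correct = factorial(0) == 1 && factorial(5) == 120;

        System.out.println("static  (uses recursion): " + looksRecursive);
        System.out.println("dynamic (correct output): " + correct);
    }
}
```

A hybrid tool combines both signals: the dynamic facet establishes that the program works, while the static facet establishes that it works in the intended way.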
“…Tool support is primarily aimed at helping the tutor. Atelier uses two tools: Zita, to highlight potential programming issues [3], and Apollo, to estimate whether a student achieves certain learning outcomes [6]. Importantly, neither tool is used for marking or as a substitute for tutor feedback; both are meant to aid the tutor.…”
Section: The Atelier Platform; citation type: mentioning
confidence: 99%