2018
DOI: 10.1002/cae.22021
Soploon: A virtual assistant to help teachers to detect object‐oriented errors in students’ source codes

Abstract: When checking students’ source codes, teachers tend to overlook some errors. This work introduces Soploon, a tool that automatically detects novice programmer errors. By using this tool, teachers can reduce the number of overlooked errors, so students receive more complete and exhaustive feedback about their errors and misconceptions.

Cited by 5 publications (6 citation statements)
References 29 publications
“…The existence or the planning of an evaluation of the proposed tool was also a condition to include a study to this research. From the selected studies one described an evaluation that is planned to take place (Yang et al, 2015) and the rest of the studies (Alonso et al, 2008; Alonso & Py, 2009; Ardimento et al, 2020; Azimullah et al, 2020; Blau & Moss, 2015; de Andrade Gomes et al, 2017; Dietrich & Kemp, 2008; Dominique et al, 2013; Fehnker & de Man, 2019; Hashiura et al, 2009; Herout & Brada, 2015; Mirmotahari et al, 2019; Silva & Dorça, 2019; Vallejos et al, 2018; Yan et al, 2020; Yang et al, 2018; Zaw et al, 2018) presented a completed evaluation. From the evaluation data, the most accurate and complete were the study evaluation context [V6], the number of participants [V7], and the evaluation outcome [V8].…”
Section: Results
confidence: 99%
“…In two studies (Alonso et al, 2008; Alonso & Py, 2009) the tool utilized did not take into account the correctness of alternative solutions of the students. Vallejos et al (2018) came to the conclusion that it is impossible for the proposed tool to detect some specific errors. All these outcomes come to an agreement with Herout & Brada (2015) who mention that fully automatic validation is not a solution to all the problems.…”
Section: Results
confidence: 99%
“…Only six papers evaluate the feedback by comparing the automatically generated feedback to human-generated feedback [74,113,119,144,170,183]. Table 3 shows the count of tools by different evaluation techniques that involved comparing the results from the AAT with a human and the authors' sentiment of the performance of the AAT.…”
Section: Performance Against Human Graders (RQ4, RQ5)
confidence: 99%