Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education
DOI: 10.1145/3341525.3387430
Automated Assessment of Android Exercises with Cloud-native Technologies

Cited by 11 publications (11 citation statements). References 21 publications.
“…However, in order to allow tests to run successfully on student code, they had to impose restrictions on the UI (including on identifier names); when they lifted those restrictions in the last phase of development, so as not to limit students' creativity, the success rate dropped sharply. Bruzual et al [8] presented a system for automated assessment of Android exercises that runs exercise-specific unit tests directly on the APK file, relieving the tutor of the need to compile the student's submission. However, while this approach seems to scale well, ignoring the source code can limit the possibility of offering more insightful feedback to students.…”
Section: Discussion
confidence: 99%
“…The goal is to measure how well the submitted code does what it is meant to do. In [32], the authors propose a system to automatically assess whether certain functionalities have been correctly implemented in an Android mobile application. In [33], submitted code is run against tests to check whether it conforms to the requirements defined by instructors.…”
Section: Code Semantics
confidence: 99%
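The test-based grading described above can be sketched in miniature, outside Android, with Python's unittest machinery. Everything here is illustrative: the `add` function stands in for a learner's submission, and `ExerciseTests` for the instructor-defined requirement tests; a real autograder would load the submission from an upload rather than define it inline.

```python
import unittest

# Hypothetical student submission: in a real autograder this would be
# loaded from the learner's uploaded code, not defined inline.
def add(a, b):
    return a + b

# Exercise-specific tests written by the instructor; each test encodes
# one functional requirement the submission must satisfy.
class ExerciseTests(unittest.TestCase):
    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

def grade(test_case):
    """Run the instructor's tests and return (passed, total)."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(test_case)
    result = unittest.TestResult()
    suite.run(result)
    total = result.testsRun
    passed = total - len(result.failures) - len(result.errors)
    return passed, total

passed, total = grade(ExerciseTests)
print(f"{passed}/{total} requirements met")
```

The pass/fail tally is what such systems typically report back to the learner; the cited Android systems do the same with instrumentation tests run against the submitted app instead of plain unit tests.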
“…Other kinds of feedback are also possible. In [32], the authors propose including a screenshot of the mobile application taken just before a failure is detected, to help learners debug their code. This screenshot can also help learners interpret the test results and any error stack trace.…”
Section: Other Kinds Of Feedback
confidence: 99%
“…This approach is not scalable, especially in the context of online courses such as SuaCode with thousands of learners. In addition, although research and development have been done to automate the assessment of some graphical and mobile application assignments, none has targeted emerging languages geared toward the interactive arts, such as the Processing language [2,9,25]. Moreover, none is fully integrated into an online course system, and none performs syntax, semantic, and style analysis in addition to dynamic analysis.…”
Section: Introduction
confidence: 99%