Proceedings of the 45th ACM Technical Symposium on Computer Science Education 2014
DOI: 10.1145/2538862.2538947

An assessment model for large project courses

Abstract: Larger project courses, such as capstone projects, are essential in a modern computing curriculum. Assessing such projects is, however, extremely challenging. There are various aspects and tradeoffs of assessments that can affect the quality of a project course. Individual assessments can give fair grading of individuals, but may lose focus of the project as a group activity. Extensive teacher involvement is necessary for objective assessment, but may affect the way students are working. Continuous feedback t…

Cited by 47 publications (2 citation statements) · References 17 publications

“…In a substantial review of the assessment of computing at school and tertiary levels, the Innovation and Technology in Computer Science Education (ITiCSE) working group of Giordano et al (2015) lists the following best practices: de-emphasize syntax; formulate assessments independent of any specific programming language by resorting to pseudocode that avoids any form of ambiguity (Cutts et al, 2014); make use of grading rubrics to make the assessment of projects less subjective and more transparent to students (Vasilevskaya, Broman, & Sandahl, 2014); resort to gamification and competitions to render assessment more engaging (Gouws, Bradshaw, & Wentworth, 2013; Hakulinen, Auvinen, & Korhonen, 2015).…”
Section: Practical Considerations
confidence: 99%
“…Lejk and Wyvill [9] identified 6 different approaches to the allocation of marks to group work that consider: individual versus group marks, student contribution to assessment distribution, weighting factors for individuals, and separating individual and group assessed items. In [10], the authors stated that it is very hard to find a single assessment activity that balances individual and group assessment, involves both teachers and students, and is summative as well as formative.…”
Section: Literature Review
confidence: 99%
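
To make the weighting-factor idea in the statement above concrete, here is a minimal, hypothetical sketch in the spirit of the mark-allocation schemes Lejk and Wyvill categorise: a shared group mark is scaled by each member's peer-assessment score relative to the team mean. The function name, the normalisation rule, and the cap are assumptions for illustration only, not the cited papers' method.

```python
# Hypothetical sketch of one mark-allocation scheme in the family Lejk and
# Wyvill [9] categorise: a shared group mark scaled by a per-student
# weighting factor derived from peer assessment. Names, the normalisation
# rule, and the cap are illustrative assumptions, not the cited method.

def individual_marks(group_mark, peer_scores, cap=100):
    """Scale a group mark by each member's peer score relative to the team mean.

    peer_scores maps member name -> total peer-assessment points received
    (e.g. summed contribution ratings from teammates).
    """
    mean_score = sum(peer_scores.values()) / len(peer_scores)
    marks = {}
    for member, score in peer_scores.items():
        weighting = score / mean_score      # 1.0 means "average contribution"
        marks[member] = min(cap, round(group_mark * weighting, 1))
    return marks


if __name__ == "__main__":
    # Hypothetical team: the group product earned 72/100; teammates rated
    # each member's contribution during the project.
    print(individual_marks(72, {"Ana": 18, "Ben": 15, "Chloe": 12}))
    # -> {'Ana': 86.4, 'Ben': 72.0, 'Chloe': 57.6}
```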