Proceedings of the 2017 ACM Conference on International Computing Education Research (ICER '17)
DOI: 10.1145/3105726.3106186

Principled Assessment of Student Learning in High School Computer Science

Cited by 52 publications (8 citation statements) | References 26 publications
“…Efforts to create high-quality K-12 CS assessments have been plagued by several issues, including the field's difficulties in reaching consensus on the definition and scope of the various components of and terminology within the CS educational landscape, as well as the fact that many skills and ideas in CS are broad, complex, and multi-part (Webb et al., 2017; Tang et al., 2020). Muddiness around the construct definitions and learning progressions can yield assessments that focus on surface-level knowledge rather than deeper application of skills and practices (Denning, 2017), and/or assessments that primarily reflect specifics of a given course rather than more general, course-agnostic knowledge and skills (Snow et al., 2017).…”
Section: Assessment Context (mentioning)
confidence: 99%
“…However, the authors do not reveal how the instrument was developed or whether it was validated. Snow et al. [77] created a validated instrument that measures the practices of CT. The authors argued that relying on programming constructs to infer students' knowledge of computer science is not enough.…”
Section: CT Assessment (mentioning)
confidence: 99%
“…The ECD process involved (1) working with various stakeholders to identify the important computer science skills to measure, (2) mapping those skills to a model of evidence that can support inferences about those skills, and (3) developing tasks that elicit that evidence [2]. The assessments were field tested with 941 students over two years [21].…”
Section: Assessment of Computational Thinking Practices (mentioning)
confidence: 99%
“…The tasks within each assessment are well aligned with each other and with the targeted learning goals. See Snow, Rutstein, Bienkowski, and Xu [21] for additional details on the pilot study and validation results.…”
Section: Assessment of Computational Thinking Practices (mentioning)
confidence: 99%