Computing education researchers and educators use a wide range of approaches to measure students' prior programming knowledge. Such measurement can help adapt learning goals and assessment tools to groups of learners with different skill levels and backgrounds. There is, however, no consensus on whether and how prior programming knowledge should be measured. Traditional background surveys are often ad hoc and non-standard, which prevents comparing results across course contexts, levels, and learner groups. Moreover, surveys may yield inaccurate information and may lack the detail needed to be useful. Tests, in contrast, can characterize student knowledge and skills with much greater detail and accuracy, but large-scale tests are typically time-consuming or impractical to arrange. To bridge the gap between ad hoc surveys and standardized tests, we propose and evaluate a novel self-evaluation instrument for measuring prior programming knowledge in introductory programming courses. The instrument examines typical programming course concepts in detail, taking different levels of proficiency into account. Based on a sample of two thousand introductory programming course students, our analysis shows that the instrument is internally consistent, correlates with traditional background metrics, and identifies students with varying programming backgrounds.

CCS CONCEPTS: • Social and professional topics → Computer science education; Model curricula; Student assessment.
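The internal-consistency claim above is typically established with a statistic such as Cronbach's alpha over the instrument's items. The sketch below illustrates that computation in plain Python; the item names and response data are hypothetical examples, not data from the study.

```python
# Cronbach's alpha: alpha = (k / (k - 1)) * (1 - sum(item variances) / total variance)
# A common reliability statistic for multi-item self-evaluation instruments.

def cronbach_alpha(items):
    """items: list of per-item response lists, one entry per respondent."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):
        # Sample variance with Bessel's correction.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(it) for it in items)
    # Each respondent's total score across all items.
    totals = [sum(it[j] for it in items) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Hypothetical 5-point self-ratings of four concepts (e.g. variables,
# conditionals, loops, functions) by six students.
responses = [
    [1, 2, 4, 5, 3, 2],  # variables
    [1, 3, 4, 5, 2, 2],  # conditionals
    [2, 2, 5, 4, 3, 1],  # loops
    [1, 2, 4, 5, 3, 1],  # functions
]
alpha = cronbach_alpha(responses)
```

With highly consistent ratings like these, alpha lands near 1; values above roughly 0.7–0.8 are conventionally read as acceptable internal consistency.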
Programming teachers have a strong need for easy-to-use instruments that provide reliable and pedagogically useful insights into student learning. Validated tools that can rapidly assess student understanding of basic programming concepts are scarce, however. Concept inventories such as the SCS1 questionnaire can offer great benefits; this article explores the additional value that may be gained from relatively simple self-evaluation metrics. We apply a lightweight self-evaluation instrument (SEI) in an introductory programming course and compare the results to existing performance measures, such as examination grades and the SCS1. We find that the SEI correlates with a program-writing examination about as strongly as the SCS1 does, although both instruments correlate only moderately with the examination and with each other. Furthermore, students are far more likely to voluntarily answer the lightweight SEI than the SCS1. Overall, our results suggest that both the SEI and other instruments need substantial improvement, and we outline future work towards that end.

CCS CONCEPTS: • Social and professional topics → Computer science education; Model curricula; Student assessment.
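The correlation comparison described above is the kind of analysis a rank correlation such as Spearman's rho supports, since instrument scores and exam grades are ordinal. The sketch below computes Spearman's rho in plain Python; the SEI scores and exam grades are hypothetical illustration data, not results from the article.

```python
# Spearman rank correlation between two score lists: rank both variables
# (averaging ranks over ties), then take the Pearson correlation of the ranks.

def rank(xs):
    """Return ranks of xs (1-based), averaging ranks over tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: per-student SEI self-evaluation scores and exam grades.
sei = [2.0, 3.5, 1.0, 4.0, 2.5, 3.0]
exam = [55, 70, 40, 85, 60, 72]
rho = spearman(sei, exam)
```

In practice one would also report a p-value and handle missing responses; a library routine such as `scipy.stats.spearmanr` covers both.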