Background: The idea of computational thinking is underpinned by the belief that anyone can learn and use the underlying concepts of computer science to solve everyday problems. However, most studies on the topic have investigated the development of computational thinking through programming activities, which are cognitively demanding. There is a dearth of evidence on how computational thinking augments everyday problem solving when it is decontextualized from programming.

Objectives: In this study, we examined how computational thinking, when untangled from the haze of programming, is demonstrated in everyday problem solving, and investigated the features of such solvable problems.

Methods: Using a multiple case study approach, we tracked how seven university students used computational thinking to solve the everyday problem of a route planning task as part of an 8-week-long Python programming course.

Results and Conclusions: Computational thinking practices are latent in everyday problems, and intentionally structuring everyday problems is valuable for discovering the applicability of computational thinking. Decomposition and abstraction are prominent computational thinking components used to simplify everyday problem solving.

Implications: It is important to strike a balance between the correctness of algorithms and simplification of the process of everyday problem solving.
This study attempted to conceptualise and measure learners’ perceptions of their collaborative problem-based learning and peer assessment strategies in a technology-enabled context. Drawing on the extant literature, we integrate collaborative, problem-based and peer assessment learning strategies and propose a new model, the collaborative problem-based learning and peer assessment (Co-PBLa-PA) conceptual framework, which forms the basis of a new psychometrically sound and conceptually grounded scale, the collaborative problem-based learning and peer assessment strategies inventory (CO-PBLa-PA-SI). The development and validation of the CO-PBLa-PA-SI, based on the methodological and conceptual insights gained from prior research, involved identifying the following four scales: capacity to collaborate, readiness to engage, task-based interest and peer feedback usefulness. An item pool comprising 16 items was established and verified by two panels of judges using a formalised card sorting procedure. Confirmatory factor analysis was conducted to validate the instrument in a small-scale (N = 164) study. The CO-PBLa-PA-SI scale showed strong construct validity and reliability, with Cronbach’s coefficient alpha ranging from .828 to .880, suggesting strong internal consistency. The resultant instrument is intended as a tool to reliably measure learners’ perceptions of their collaborative problem-based learning and peer assessment strategies in a technology-enabled context.
Implications for practice or policy:
A psychometrically validated scale could be used by a growing community of academicians, educators and instructional designers to assess learners’ collaborative problem-based learning and peer assessment strategies when using interactive technologies;
A systematically collected data set obtained from the CO-PBLa-PA-SI data may have practical implications in terms of informing teachers about appropriate instructional design practices for the enhancement of collaborative, problem-based and peer assessment learning strategies in technology-enabled settings.
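The reliability figures reported above are Cronbach’s coefficient alpha values. As a minimal sketch of how that statistic is computed from an N-respondents-by-k-items score matrix (the data below are made up for illustration and are not from the study):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances /
    variance of the respondents' total scores)."""
    k = scores.shape[1]
    sum_item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Illustrative (fabricated) responses: 3 respondents x 2 items.
# Perfectly correlated items yield alpha = 1.0.
example = np.array([[1.0, 1.0],
                    [2.0, 2.0],
                    [3.0, 3.0]])
```

Values near the reported .828–.880 range are conventionally read as strong internal consistency.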