Student science proficiency development demands the support of a coherent and sustained learning environment. Scholars argue that project-based learning (PBL) is an effective approach to promoting student science learning compared with conventional instruction. Yet few studies have delved into the learning process to explore how a coherent PBL system consisting of curriculum, instruction, assessment, and professional learning promotes student learning. To address this gap, this study investigated whether students' science proficiency on three post-unit assessments predicted their achievement on a third-party-designed end-of-year summative science test in a coherent high school chemistry PBL system aligned with the recent US science standards. The study employed a cluster randomized experimental design to test an intervention using our PBL system and drew only on data from the treatment group. The sample consisted of 1344 treatment students who participated in our PBL intervention and completed both the pretest and the end-of-year summative test. Students' responses to the three post-unit assessments were selected and rated to indicate their science proficiency. Two-level hierarchical linear models were employed to explore the effects of students' performances on the three post-unit assessments on their end-of-year summative achievement, controlling for student prior knowledge (i.e., pretest and prior post-unit assessments). This study yields two main findings. First, students' science proficiency in the three units cumulatively and individually predicted their summative science achievement. Second, students' performances on the two types of tasks (i.e., developing and using models) in the three post-unit assessments also predicted their summative science achievement. This research contributes to the field by showing that a coherent standards-aligned PBL system can significantly and sustainably impact student science proficiency development.
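The two-level hierarchical linear model described (students at level 1 nested in classrooms at level 2, with pretest and post-unit scores as predictors of summative achievement) can be sketched with statsmodels on simulated data. All variable names, coefficients, and data below are illustrative assumptions, not the study's actual dataset or model specification.

```python
# Minimal sketch of a two-level HLM: random intercept per classroom,
# student-level predictors. Data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_class, n_per = 20, 30
classroom = np.repeat(np.arange(n_class), n_per)
class_effect = rng.normal(0, 1, n_class)[classroom]  # level-2 variation

pretest = rng.normal(50, 10, n_class * n_per)
unit_score = 0.5 * pretest + rng.normal(0, 5, n_class * n_per)
# Summative score depends on pretest, post-unit performance, and classroom.
summative = (10 + 0.3 * pretest + 0.4 * unit_score + class_effect
             + rng.normal(0, 5, n_class * n_per))

df = pd.DataFrame(dict(summative=summative, pretest=pretest,
                       unit_score=unit_score, classroom=classroom))

# Random intercept for classroom mirrors the clustered design;
# pretest controls for prior knowledge, as in the study's analysis.
model = smf.mixedlm("summative ~ pretest + unit_score",
                    df, groups=df["classroom"]).fit()
print(model.params[["pretest", "unit_score"]])
```

With this setup, the fixed-effect estimates recover the simulated coefficients while the group variance absorbs classroom-level differences.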
Involving students in scientific modeling practice is one of the most effective approaches to achieving the learning goals of next generation science education. Given the complexity and multirepresentational features of scientific models, scoring student-developed models is time- and cost-intensive and remains one of the most challenging assessment practices in science education. More importantly, teachers who rely on timely feedback to plan and adjust instruction are reluctant to use modeling tasks because they cannot provide timely feedback to learners. This study utilized machine learning (ML), a leading branch of artificial intelligence (AI), to develop an approach to automatically score student-drawn models and their written descriptions of those models. We developed six modeling assessment tasks for middle school students that integrate disciplinary core ideas and crosscutting concepts with the modeling practice. For each task, we asked students to draw a model and write a description of that model, which gave students with diverse backgrounds an opportunity to represent their understanding in multiple ways. We then collected student responses to the six tasks and had human experts score a subset of those responses. We used the human-scored student responses to develop ML algorithmic models (AMs) and to train the computer. Validation with new data suggests that the machine-assigned scores achieved robust agreement with human consensus scores. Qualitative analysis of student-drawn models further revealed five characteristics that might impact machine scoring accuracy: alternative expressions, confusing labels, inconsistent sizes, inconsistent positions, and redundant information. We argue that these five characteristics should be considered when developing machine-scorable modeling tasks.
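The validation step described compares machine-assigned scores against human consensus scores. A chance-corrected statistic such as Cohen's kappa is one common way to quantify that kind of human-machine agreement; the sketch below shows the computation on illustrative score vectors (the abstract does not specify which agreement statistic the study used, so this is an assumed example, not the study's method).

```python
# Sketch of human-machine agreement via Cohen's kappa.
# Score vectors are made-up examples, not the study's data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equal-length score lists."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Expected agreement if both raters assigned labels independently
    # according to their own marginal frequencies.
    expected = sum(counts_a[lab] * counts_b[lab] for lab in labels) / (n * n)
    return (observed - expected) / (1 - expected)

human_scores = [2, 1, 0, 2, 2, 1, 0, 1, 2, 0]
machine_scores = [2, 1, 0, 2, 1, 1, 0, 1, 2, 0]
print(round(cohens_kappa(human_scores, machine_scores), 3))  # → 0.851
```

Values near 1 indicate agreement well beyond chance; rules of thumb often treat kappa above roughly 0.8 as strong agreement.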
This study develops a standardized instrument for measuring classroom teaching and learning in secondary chemistry lessons. Based on previous studies and interviews with expert teachers, a hypothesized progression of five quality levels was constructed to represent the quality of chemistry lessons in Chinese secondary schools. The measurement instrument was revised from the Evaluation Scale of Effectiveness of Primitive System of Classroom Teaching (ESEPrSCT). Ninety videotaped chemistry lessons were collected and rated to validate the instrument in the pilot and field stages. By means of Rasch modeling, the final instrument, consisting of 18 items with five response categories, was validated. The results provide validity and reliability evidence for using this instrument to assess the quality of chemistry lessons.
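Rasch analysis of items with five ordered response categories is typically carried out with a polytomous extension such as Andrich's rating scale model. The sketch below computes that model's category probabilities; all parameter values are illustrative, and the abstract does not state which polytomous Rasch variant the study applied.

```python
# Sketch of the Andrich rating scale model: probability of each ordered
# response category given person ability, item location, and shared
# category thresholds. Parameter values are illustrative only.
import math

def rating_scale_probs(theta, delta, taus):
    """Category probabilities for ability theta, item location delta,
    and len(taus)+1 ordered categories with thresholds taus."""
    # Cumulative logits: category 0 has logit 0 by convention.
    logits = [0.0]
    for tau in taus:
        logits.append(logits[-1] + (theta - delta - tau))
    expd = [math.exp(l) for l in logits]
    total = sum(expd)
    return [e / total for e in expd]

# Five response categories require four thresholds.
probs = rating_scale_probs(theta=0.5, delta=0.0,
                           taus=[-1.5, -0.5, 0.5, 1.5])
print([round(p, 3) for p in probs])
```

Fitting such a model to the 90 rated lessons yields item difficulty and threshold estimates, plus the fit statistics used as validity and reliability evidence.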
This study aims to develop and validate a new instrument for measuring chemistry teachers’ perceptions of Pedagogical Content Knowledge for teaching Chemistry Core Competencies (PCK_CCC) in the context of new...