This paper builds on the current literature on learning progressions in science to address the question, “What is the nature of the learning progression in the content domain of the structure of matter?” We introduce a learning progression in response to that question and illustrate a methodology, the Construct Modeling approach (Wilson), for investigating the progression through a developmentally based iterative process. The study puts forth a progression of how students understand the structure of matter by empirically interrelating constructs of different levels of sophistication, using a sample of 1,087 middle-grades students from a large, diverse public school district in the western United States. The study also shows that student thinking can be more complex than hypothesized, as in our discovery of a substructure of understanding within a single construct of the larger progression. Data were analyzed using a multidimensional Rasch model. Implications for teaching and learning are discussed: we suggest that a teacher's choice of instructional approach should be fashioned in terms of an evidence-grounded model of the paths through which learning might best proceed, working toward the desired targets with a pedagogy that also cultivates students’ development as effective learners. This research highlights the need for assessment methods that serve both as guides for formative work and as tools to verify that learning goals have been achieved at the end of the learning period. The development and investigation of a learning progression of how students understand the structure of matter using the Construct Modeling approach makes an important contribution to research on learning progressions and serves as a guide for planning and implementing instruction on this topic. © 2017 Wiley Periodicals, Inc. J Res Sci Teach 9999:1024–1048, 2017
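The abstract above analyzes responses with a multidimensional Rasch model. As a minimal sketch of the underlying idea (all item names, difficulties, and abilities below are hypothetical, not values from the study): in the between-item multidimensional case, each item is assigned to one construct dimension, and the probability of a correct response depends on the respondent's ability on that dimension and the item's difficulty, both on a logit scale.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Between-item multidimensionality: each item belongs to exactly one
# construct (dimension), so its probability depends only on that
# dimension's ability estimate. All values here are hypothetical.
item_dims = {"item1": "matter_structure", "item2": "density"}
item_diffs = {"item1": -0.5, "item2": 1.2}           # difficulties (logits)
abilities = {"matter_structure": 0.8, "density": 0.1}  # one theta per dimension

probs = {i: rasch_prob(abilities[item_dims[i]], item_diffs[i])
         for i in item_dims}
```

When ability equals difficulty, the model gives a 0.5 probability of success, which is what makes Rasch item difficulties and person abilities directly comparable on one scale.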
While the impact of authentic research experiences in STEM on student engagement and interest in science has been documented, less is known about the role of peer communities in fostering this interest and engagement. This research explores the idea that a strong peer community can catalyze deep learning and engagement in scientific research among high school students. The program engaged 20 high school students each year in a year-long community-based participatory research project in public health. The study used a mixed-methods approach, combining data from focus group discussions, observations, and surveys to describe the program's impact on participants. Analysis across three years reveals that (a) the program was associated with a statistically significant shift in students' identity as researchers, with a medium growth effect size (Cohen's d) for the second and third years, an effect that moderated by the end of the program, and (b) the peer community played a central role in participants' engagement in the program, shaped their identity as researchers, and strengthened their interest in STEM. These findings convey the importance of designing STEM experiences that build strong peer communities around science practices and show how such communities can have profound impacts on students' identities in STEM.
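The growth effect size reported above is Cohen's d, a standardized mean difference. A minimal sketch of how such an effect size is computed (the pre/post survey scores below are hypothetical illustrations, not the program's data):

```python
import statistics

def cohens_d(pre, post):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(pre), len(post)
    s1, s2 = statistics.stdev(pre), statistics.stdev(post)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(post) - statistics.mean(pre)) / pooled

# Hypothetical researcher-identity survey means (e.g., on a 1-5 scale)
pre_scores = [3.0, 3.5, 2.8, 3.2, 3.1]
post_scores = [3.6, 4.0, 3.3, 3.9, 3.5]
d = cohens_d(pre_scores, post_scores)
```

By the usual rule of thumb, d around 0.5 is read as a medium effect and around 0.8 as large, which is how a "medium growth effect size" would be interpreted.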
Researchers in psychology and education strive to understand the intersections among validity, educational measurement, and cognitive theory. Guided by a mixed-model conceptual framework, this study investigates how respondents' opinions inform the validation argument. Validity evidence for a science assessment was collected through traditional paper-and-pencil tests, surveys, and think-aloud and exit interviews of fifth- and sixth-grade students. Item response theory analyses supplied technical descriptions of evidence on the assessment's internal structure. Surveys provided information on perceived item difficulty and fairness. Think-aloud and exit interviews provided context and response-process information to clarify and explain issues. This research demonstrates how quantitative and qualitative data can be used in concert to inform the validation process and highlights the use of think-aloud interviews as an explanatory tool.
The Likert response format is almost ubiquitous for items in the social sciences and has particular virtues in the relative simplicity of item generation and the efficiency of coding responses. However, in this article we critique this very common item format, focusing on its affordances for interpretation in terms of internal structure validity evidence. We suggest an alternative, the Guttman response format, which we see as providing a better approach for gathering and interpreting internal structure validity evidence. Using a specific survey-based example, we illustrate how items in this alternative format can be developed, exemplify how such items operate, and explore some comparisons between the results from the two formats. In conclusion, we recommend use of the Guttman response format to improve the interpretability of the resulting outcomes. Finally, we note how this approach may be used in tandem with Likert-format items to balance efficiency with interpretability.
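One way to picture the contrast the abstract draws: in a Guttman-style response format, the options themselves describe increasing levels of the construct, so the selected option reads directly as a level, whereas a Likert rating's relation to the construct must be inferred from agreement with a single statement. A minimal sketch (the item wording and levels below are hypothetical, not items from the article):

```python
# Hypothetical Guttman-format item: options are ordered descriptions of
# increasing sophistication, so the chosen option IS the construct level.
guttman_options = [
    "I cannot explain this topic at all",   # level 0
    "I can state the basic idea",           # level 1
    "I can explain it with an example",     # level 2
    "I can teach it to someone else",       # level 3
]

def score_guttman(choice_index):
    """Return the level and its directly interpretable description."""
    return choice_index, guttman_options[choice_index]

# Likert format: one statement rated for agreement; what a "4" means for
# the construct has to be inferred rather than read off the option text.
def score_likert(rating, low=1, high=5):
    if not low <= rating <= high:
        raise ValueError("rating out of range")
    return rating
```

The design trade-off the authors note follows from this: Likert items are quicker to write and administer, while Guttman items carry their interpretation in the option text itself.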
Item explanatory models have the potential to provide insight into why certain items are easier or more difficult than others. Through the selection of pertinent item features, one can gather validity evidence for the assessment if construct-related item characteristics are chosen. This is especially important when designing assessment tasks that address new standards. Using data from the Learning Progressions in Middle School Science (LPS) project, this paper adopts an "item explanatory" approach and investigates whether certain item features can explain differences in item difficulties by applying an extension of the linear logistic test model. Specifically, this paper explores the effects of five features on item difficulty: type (argumentation, content, embedded content), scenario-based context, format (multiple-choice or open-ended), graphics, and academic vocabulary. Interactions between some of these features were also investigated. With the exception of context, all features had a statistically significant effect on difficulty.
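The linear logistic test model (LLTM) referenced above decomposes each item's difficulty into a weighted sum of effects for the item features it carries. A minimal sketch of that decomposition (the feature effects and item codings below are hypothetical, not the LPS estimates):

```python
# LLTM-style decomposition: item difficulty = intercept + sum over
# features of (feature code in the Q-matrix) * (feature effect eta).
# All numeric values here are hypothetical illustrations.
features = ["open_ended", "has_graphic", "academic_vocab"]
eta = {"open_ended": 0.6, "has_graphic": -0.2, "academic_vocab": 0.3}
intercept = -0.1

# Q-matrix rows: which features each item has (1) or lacks (0)
item_q = {
    "item_A": {"open_ended": 1, "has_graphic": 0, "academic_vocab": 1},
    "item_B": {"open_ended": 0, "has_graphic": 1, "academic_vocab": 0},
}

def lltm_difficulty(q_row):
    """Model-implied difficulty for one item, in logits."""
    return intercept + sum(eta[f] * q_row[f] for f in features)

difficulties = {name: lltm_difficulty(q) for name, q in item_q.items()}
```

Because difficulty is explained by construct-relevant features rather than estimated freely per item, a good LLTM fit is itself validity evidence that those features drive item demands, which is the logic the abstract describes.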