2014
DOI: 10.1002/sce.21118
Using Rasch Measurement for the Development and Use of Affective Assessments in Science Education Research

Abstract: With the demand for quality quantitative instruments in the field of science education rising, additional measures of currently unassessed affective variables need to be constructed. In this study, we discuss the survey creation and evaluation process of the STEM Awareness Community Survey (SACS) through an application of Liu's (2010) framework for developing and using new affective instruments in science education. Liu's (2010) survey development framework uses Rasch measurement methods in survey evaluation t…

Cited by 29 publications (20 citation statements, published 2015–2023) · References 19 publications

Citation statements (ordered by relevance):
“…One explanation for these findings might be that removing the misfitting items decreases the reliability of the test scores, which, in turn, might lead to poorer selection decisions and predictive validity. Although in the practice of large-scale educational testing misfitting items are more often replaced or revised rather than removed, in small-scale testing the practice of removing the misfitting items from the test is encountered more often (e.g., Bolt, Deng, & Lee, 2014; Sinharay, Haberman, & Jia, 2011; Sinharay & Haberman, 2014; Sondergeld & Johnson, 2014). Therefore, practitioners and researchers should be very careful when removing misfitting items from a test, whenever this possibility exists…”
Section: Discussion (citation type: mentioning)
confidence: 99%
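As a concrete illustration of what "misfitting" means in Rasch practice: below is a minimal sketch, assuming a dichotomous Rasch model with person and item parameters already estimated, of the standard infit/outfit mean-square statistics and a commonly used 0.5–1.5 screening range. The data and threshold here are illustrative, not taken from the cited studies.

```python
import numpy as np

def rasch_prob(theta, b):
    """P(X=1) under the dichotomous Rasch model, persons x items."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def item_fit(X, theta, b):
    """Infit and outfit mean-square statistics per item.

    X: (persons x items) 0/1 response matrix
    theta: person ability estimates; b: item difficulties
    """
    P = rasch_prob(theta, b)
    W = P * (1.0 - P)          # model variance of each response
    R = X - P                  # score residuals
    Z2 = R**2 / W              # squared standardized residuals
    outfit = Z2.mean(axis=0)                       # unweighted mean square
    infit = (R**2).sum(axis=0) / W.sum(axis=0)     # information-weighted mean square
    return infit, outfit

# Hypothetical data: flag items whose MNSQ falls outside 0.5-1.5,
# a commonly cited Rasch quality-control range (not a universal rule).
rng = np.random.default_rng(0)
theta = rng.normal(size=200)
b = np.linspace(-2, 2, 10)
X = (rng.random((200, 10)) < rasch_prob(theta, b)).astype(int)
infit, outfit = item_fit(X, theta, b)
misfit = (infit < 0.5) | (infit > 1.5) | (outfit < 0.5) | (outfit > 1.5)
print("misfitting items:", np.where(misfit)[0])
```

Whether a flagged item is then removed, revised, or replaced is exactly the judgment call the excerpt above warns about.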
“…For example, the dichotomous Rasch model has been applied to MC science assessments (e.g., Boone & Scantlebury, 2006). Rasch models for rating scale data (Andrich, 1978; Masters, 1982) have been used to measure affective variables related to science (Sondergeld & Johnson, 2014), including constructs such as socioscientific decision-making strategies (Eggert & Bögeholz, 2009) and self-efficacy (Boone, Townsend, & Staver, 2010)…”
Section: Modeling Student Responses to MDDMC Items (citation type: mentioning)
confidence: 99%
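For orientation, these are the standard forms of the two model families named in this excerpt, in the usual notation (θ_n is the person measure, δ_i the item difficulty, τ_j the shared category thresholds of the rating scale model):

```latex
% Dichotomous Rasch model: probability of a correct (or endorsed) response
P(X_{ni}=1) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}

% Andrich (1978) rating scale model for categories k = 0, \dots, m,
% with shared thresholds \tau_1, \dots, \tau_m and \tau_0 \equiv 0:
P(X_{ni}=k) = \frac{\exp\left( \sum_{j=0}^{k} (\theta_n - \delta_i - \tau_j) \right)}
                   {\sum_{l=0}^{m} \exp\left( \sum_{j=0}^{l} (\theta_n - \delta_i - \tau_j) \right)}
```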
“…In the subsequent Rasch modeling, the overall analysis, the item unidimensionality test, item fit, and item distribution were carried out using WINSTEPS 3.72.0 software, with reference to the user manual (Linacre, 2006) and complementary studies (Liu, 2010; Sondergeld & Johnson, 2014) for index criteria. Table 5 presents a summary of all person and item analysis results…”
Section: The Results of Rasch Analysis (citation type: mentioning)
confidence: 99%
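To make the "summary of all persons and items" concrete: here is a minimal sketch, in Python rather than WINSTEPS (all names and numbers are illustrative, not WINSTEPS output), of the separation and reliability summaries such a table typically reports, assuming person measures and their standard errors have already been estimated:

```python
import numpy as np

def separation_reliability(measures, se):
    """Rasch separation index and reliability for a set of measures.

    Decomposes observed variance into "true" variance plus mean error
    variance, following the usual Rasch summary-statistics logic.
    """
    measures = np.asarray(measures, dtype=float)
    se = np.asarray(se, dtype=float)
    obs_var = np.var(measures, ddof=1)    # observed variance of measures
    err_var = np.mean(se ** 2)            # mean squared standard error
    true_var = max(obs_var - err_var, 0.0)
    separation = np.sqrt(true_var / err_var)  # G = "true" SD / RMSE
    reliability = true_var / obs_var           # equals G^2 / (1 + G^2)
    return separation, reliability

# Illustrative use with hypothetical person measures (logits) and SEs:
person_measures = [-1.2, -0.4, 0.1, 0.6, 1.3, 2.0]
person_se = [0.45, 0.40, 0.38, 0.39, 0.42, 0.50]
G, rel = separation_reliability(person_measures, person_se)
print(f"person separation = {G:.2f}, person reliability = {rel:.2f}")
```

The same function applied to item measures and their standard errors yields the item separation and reliability entries of such a summary table.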