2010
DOI: 10.1119/1.3443565
Analyzing force concept inventory with item response theory

Abstract: PACS number(s): 01.40.Fk. Item Response Theory (IRT) is a popular assessment method used in education measurement, which builds on the assumption of a probability framework connecting students' innate ability and their actual performance on test items. The model transforms students' raw test scores through a nonlinear regression process into a scaled proficiency rating, which can be used to compare results obtained with different test questions. IRT also provides a theoretical approach to address ceiling…
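The "probability framework" described in the abstract is usually written as an item response function. As an illustrative reference point only, the three-parameter logistic (3PL) model commonly applied to multiple-choice instruments such as the FCI takes the form below; the exact parameterization used in the paper may differ.

```latex
% 3PL item response function (illustrative form): the probability that a
% student of latent ability theta answers item i correctly.
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}}
```

Here a_i is the item discrimination, b_i the item difficulty, and c_i the guessing floor for item i.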

Cited by 92 publications (109 citation statements)
References 10 publications
“…Analysis of the data set used in this paper suggests that student scores on the two half-length tests reasonably follow a normal distribution (R 2 > 0.97) and that the condition of this data set is appropriate for conducting IRT analysis. Similar results have also been reported in our previous study on IRT application to FCI data [18].…”
Section: Test Evaluation Using IRT Analysis (supporting)
confidence: 80%
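The statement above judges whether total scores are approximately normal by how well a Gaussian fits them (R² > 0.97). A minimal sketch of that kind of check, assuming binned score counts and a least-squares Gaussian fit; the binning, names, and simulated data are illustrative, not the cited authors' exact procedure.

```python
# Bin total scores, fit a Gaussian to the histogram, and report R^2 of the fit.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

def gaussian_fit_r2(scores, n_bins=15):
    counts, edges = np.histogram(scores, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), scores.mean(), scores.std()]
    params, _ = curve_fit(gaussian, centers, counts, p0=p0)
    residuals = counts - gaussian(centers, *params)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((counts - counts.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Example with simulated scores on a 15-item half-length test
rng = np.random.default_rng(0)
scores = np.clip(rng.normal(8, 3, size=2800), 0, 15)
print(f"R^2 of Gaussian fit: {gaussian_fit_r2(scores):.3f}")
```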
“…The typical set of assessment parameters includes test discrimination, test difficulty, and the guessing factor. Existing research has shown that IRT can be used to evaluate features of the FCI test [18,19]. The advantages of using an IRT-based method are that it can maintain assessment consistency when student populations have very different mean scores and that the estimated assessment parameters can help extend the assessment scale into questions not used on the test [16].…”
Section: Test Evaluation Using IRT Analysis (mentioning)
confidence: 99%
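The three assessment parameters named above correspond to the a, b, c parameters of the 3PL function shown after the abstract. A minimal sketch of how discrimination, difficulty, and the guessing factor each shape an item characteristic curve; the parameter values are invented for illustration, not estimates from FCI data.

```python
# Evaluate a 3PL item characteristic curve, assuming the parameterization
# P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b))).
# a: discrimination (slope), b: difficulty (location), c: guessing floor.
import numpy as np

def icc_3pl(theta, a, b, c):
    """Probability of a correct response at ability theta."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
# A well-discriminating, moderately hard item with a 5-option guessing floor:
print(np.round(icc_3pl(theta, a=1.5, b=0.5, c=0.2), 3))
# Lowering a flattens the curve (less discrimination); raising b shifts it
# right (a harder item); c sets the left asymptote (low-ability guess rate).
```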
“…Many different IRT models have been applied to the FCI [4,47–49]. Of these studies, only Wang and Bao [4] reported the item characteristic curves which show how well the data fit the IRT model; none of their curves showed the dramatic departures from fit reported for some of the engineering conceptual inventories examined by Jorion et al. [12], indicating that the items in the FCI are generally performing properly. Only Popp et al. [47] reported results disaggregated by gender; these results are described in Sec.…”
Section: Difficulty and Discrimination (mentioning)
confidence: 99%
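One common way to read item characteristic curves as a fit check, as alluded to above, is to compare the observed fraction correct in ability bins against the model curve. A rough sketch, assuming estimated abilities and 3PL parameters are already in hand; the binning scheme, names, and simulated data are illustrative assumptions.

```python
# Compare empirical proportions correct (by ability bin) with a 3PL item
# characteristic curve; large systematic gaps would indicate misfit.
import numpy as np

def icc_3pl(theta, a, b, c):
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def empirical_vs_model(theta_hat, responses, a, b, c, n_bins=8):
    edges = np.linspace(theta_hat.min(), theta_hat.max(), n_bins + 1)
    bins = np.clip(np.digitize(theta_hat, edges) - 1, 0, n_bins - 1)
    for k in range(n_bins):
        mask = bins == k
        if not mask.any():
            continue
        center = 0.5 * (edges[k] + edges[k + 1])
        print(f"theta~{center:+.2f}  observed={responses[mask].mean():.2f}  "
              f"model={icc_3pl(center, a, b, c):.2f}")

# Simulated single-item demo: responses generated from the same 3PL curve,
# so observed and model columns should agree up to sampling noise.
rng = np.random.default_rng(1)
theta_hat = rng.normal(size=2000)
a_, b_, c_ = 1.2, 0.0, 0.2
responses = (rng.random(2000) < icc_3pl(theta_hat, a_, b_, c_)).astype(int)
empirical_vs_model(theta_hat, responses, a_, b_, c_)
```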
“…Wang and Bao calculated CTT difficulty and discrimination parameters for the FCI pretest of 2800 students at a large university in the U.S. [4]. Five of the items had difficulty parameters outside of the desired range (items 1, 6, and 12 with P > 0.8 and items 17 and 26 with P < 0.2), with none having discrimination less than 0.2.…”
Section: Difficulty and Discrimination (mentioning)
confidence: 99%
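The CTT quantities discussed above can be computed directly from a 0/1 score matrix. A short sketch using the usual definitions (difficulty P = fraction correct; discrimination = corrected item-rest point-biserial correlation), which may differ in detail from the specific discrimination index Wang and Bao used; the demo data are simulated.

```python
# CTT item statistics for a 0/1 score matrix of shape (n_students, n_items).
import numpy as np

def ctt_item_stats(scores):
    scores = np.asarray(scores, dtype=float)
    difficulty = scores.mean(axis=0)  # P: fraction of students answering correctly
    rest = scores.sum(axis=1, keepdims=True) - scores  # total score excluding the item
    discrimination = np.array([
        np.corrcoef(scores[:, i], rest[:, i])[0, 1]
        for i in range(scores.shape[1])
    ])
    return difficulty, discrimination

# Simulated 2800-student, 30-item data with ability-linked responses
rng = np.random.default_rng(2)
ability = rng.normal(size=(2800, 1))
item_difficulty = rng.normal(scale=1.0, size=30)
p_correct = 1.0 / (1.0 + np.exp(-(ability - item_difficulty)))
demo = (rng.random((2800, 30)) < p_correct).astype(int)

P, D = ctt_item_stats(demo)
# Flag items outside the commonly quoted ranges (P < 0.2 or P > 0.8, D < 0.2):
print("too easy:", np.where(P > 0.8)[0] + 1)
print("too hard:", np.where(P < 0.2)[0] + 1)
print("low discrimination:", np.where(D < 0.2)[0] + 1)
```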