2020
DOI: 10.1002/wics.1531

Item response theory and its applications in educational measurement Part I: Item response theory and its implementation in R

Abstract: Item response theory (IRT) is a class of latent variable models used to develop educational and psychological tests (e.g., standardized tests, personality tests, and tests for licensure and certification). We review the theory and practice of IRT across two articles. In Part 1, we cover a broad range of topics, such as the foundations of educational measurement, the basics of IRT, and applications of IRT using R. We focus particularly on the topics that the mirt package covers. These include unidimensional a…

Cited by 8 publications (3 citation statements)
References 91 publications
“…A bifactor graded response model (Gibbons et al., 2007; Cai et al., 2011; Toland et al., 2017) was fit to item response data for symptoms and impairment items separately in the exploratory subsample, with results from the EGA community assignment used to preliminarily assign items to specific factors. Models were fit using the mirt R package (Chalmers, 2012), with the Bock and Aitkin (1981) Expectation-Maximization algorithm employed for models without cross-loadings on specific factors and the Quasi-Monte Carlo Expectation-Maximization algorithm (Hori et al., 2020) employed for all other models.…”
Section: Structural Analyses of the Duke-Vanderbilt Misophonia Screen...
confidence: 99%
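
The estimation workflow described in this citation statement can be sketched with the mirt package. The item count, factor assignments, and parameter values below are illustrative placeholders, not the cited study's actual structure:

```r
## Hedged sketch: bifactor graded response model in mirt.
## Data and item-to-specific-factor assignments are simulated placeholders.
library(mirt)

set.seed(1)
# 8 polytomous items: one general factor plus two specific factors
a <- matrix(c(rep(1.5, 8),                  # general-factor slopes
              rep(c(1, 0), each = 4),       # specific factor 1 (items 1-4)
              rep(c(0, 1), each = 4)),      # specific factor 2 (items 5-8)
            ncol = 3)
d <- matrix(rep(c(2, 0, -2), each = 8), ncol = 3)  # decreasing intercepts
dat <- simdata(a = a, d = d, N = 500, itemtype = "graded")

# Which specific factor each item loads on
specific <- rep(1:2, each = 4)

# Without cross-loadings, bfactor() estimates via the Bock-Aitkin EM algorithm
fit_em <- bfactor(dat, specific, itemtype = "graded")

# For models with cross-loadings on specific factors, quasi-Monte Carlo EM
# can be requested in mirt() via method = "QMCEM"
```

`bfactor()` exploits the dimension-reduction structure of bifactor models; for more general multidimensional models, `mirt(..., method = "QMCEM")` replaces deterministic quadrature with quasi-Monte Carlo integration, as the citing authors describe.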
“…Hambleton and Swaminathan (2013) said that IRT is predicated on three premises: parameter invariance (item characteristics do not depend on the distribution of participant ability parameters, and participant characteristic parameters do not depend on item characteristics), local independence (item responses are statistically independent conditional on the ability the test measures), and unidimensionality (each test item measures only one ability). Hori et al. (2022) have developed analytical methods and algorithms implemented in the mirt package. The mirt package includes different item response models, such as unidimensional and multidimensional models for dichotomous and polytomous items.…”
Section: Theoretical Study
confidence: 99%
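
A minimal unidimensional example in mirt illustrates the premises listed above; the simulated dichotomous data and parameter values are arbitrary assumptions:

```r
## Hedged sketch: unidimensional 2PL model with mirt on simulated data.
library(mirt)

set.seed(2)
a <- matrix(rlnorm(10, meanlog = 0.2, sdlog = 0.2))  # item discriminations
d <- matrix(rnorm(10))                               # item intercepts
dat <- simdata(a = a, d = d, N = 1000, itemtype = "2PL")

# model = 1 requests a single latent dimension (the unidimensionality premise)
fit <- mirt(dat, model = 1, itemtype = "2PL")
coef(fit, simplify = TRUE)$items   # estimated slopes and intercepts
```

Because the item parameters are estimated on a common latent scale, the fitted slopes and intercepts do not depend on which examinee sample is drawn, which is the parameter-invariance premise in practice.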
“…Persons' background characteristics are often captured in a set of covariates of different types, such as age (continuous), educational background (multi-categorical), or whether or not they are a native speaker of the test language (dichotomous). A challenge in DIF and DSF analysis is that traditional psychometric approaches used to detect DIF and DSF, such as the Wald test or multigroup analyses (e.g., Hori et al., 2020; E. S. Kim & Yoon, 2011), oftentimes require dichotomous covariates.…”
Section: Introduction
confidence: 99%
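
The traditional dichotomous-covariate approach this passage refers to can be sketched with mirt's multiple-group machinery; the group labels, invariance constraints, and data below are illustrative assumptions, not the citing authors' analysis:

```r
## Hedged sketch: DIF testing with a dichotomous grouping covariate in mirt.
library(mirt)

set.seed(3)
a <- matrix(rlnorm(6, 0.2, 0.2))
d <- matrix(rnorm(6))
dat <- rbind(simdata(a, d, 400, "2PL"),
             simdata(a, d, 400, "2PL"))
group <- rep(c("reference", "focal"), each = 400)

# Constrained baseline: all item parameters equal across groups,
# with the focal group's latent mean and variance freely estimated
mg <- multipleGroup(dat, model = 1, group = group,
                    invariance = c("free_means", "free_var",
                                   colnames(dat)))

# Test each item for DIF by freeing its slope and intercept across groups
DIF(mg, which.par = c("a1", "d"), scheme = "add")
```

Note that `group` must be a categorical (here dichotomous) variable, which is exactly the limitation the citing authors raise for continuous covariates such as age.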