The Wiley Handbook of Psychometric Testing (2018)
DOI: 10.1002/9781118489772.ch15
Unidimensional Item Response Theory

Cited by 10 publications (9 citation statements), published between 2019 and 2024
References: 73 publications

“…Furthermore, the automated item selection procedure (AISP) for the Mokken scale analysis indicated that all items belong to a single scale (unidimensional scale). That is, in a case where an item does not belong on the tested scale, that item violates the scale's assumed monotonicity: It adds no meaningful information (Meijer & Tendeiro, 2018). When the monotonicity assumption for all 21 scale items was explored, no significant violation was found (Table A1).…”
Section: Methods (mentioning; confidence: 99%)
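
The monotonicity check described in this excerpt can be illustrated with a small sketch. Mokken scale analysis and the AISP are normally run with dedicated software, and the citing authors do not show code, so the Python below is only a minimal, hypothetical illustration of the underlying idea: for a monotone item, the mean item score should not decrease as the rest score (the sum of the remaining items) increases. The function name and the simulated data are assumptions made for illustration.

```python
import numpy as np

def monotonicity_violations(responses, item, min_group_size=10):
    """Count decreases in the mean score of `item` across rest-score groups.

    responses: (n_persons, n_items) array of 0/1 item scores.
    For a monotone item, the mean item score should be non-decreasing
    as the rest score (the sum of all other items) increases.
    """
    rest = responses.sum(axis=1) - responses[:, item]
    means = []
    for r in np.unique(rest):
        group = responses[rest == r, item]
        if group.size >= min_group_size:          # ignore sparse rest-score groups
            means.append(group.mean())
    # a "violation" is any drop in mean item score between adjacent groups
    return sum(1 for lo, hi in zip(means, means[1:]) if hi < lo)

# Simulated Rasch-like data (hypothetical): 500 persons and 21 binary items,
# mirroring the 21-item scale mentioned in the excerpt above.
rng = np.random.default_rng(0)
theta = rng.normal(size=500)                      # person parameters
difficulty = np.linspace(-1.5, 1.5, 21)           # item difficulties
prob = 1 / (1 + np.exp(-(theta[:, None] - difficulty[None, :])))
data = (rng.uniform(size=prob.shape) < prob).astype(int)

print([monotonicity_violations(data, i) for i in range(data.shape[1])])
```

Full Mokken implementations additionally merge small rest-score groups and test whether each observed decrease is statistically significant, which is what "no significant violation" refers to in the excerpt.
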
“…Within the educational arena, two main theories of assessment can be used for this purpose: Classical Test Theory (CTT) and Item Response Theory (IRT). Among the differences, widely documented elsewhere (DeMars, 2010; Embretson & Reise, 2000; Hambleton et al., 1991), IRT: a) explicitly models the interaction between the item characteristics (e.g., difficulty, discrimination and guessing) and a person's latent variable (denoted by the Greek letter theta; θ) (Meijer & Tendeiro, 2018), allowing the estimation of essentially sample-independent parameters to evaluate item quality; b) opens the possibility to analyze test information (and thus reliability) at different θ ranges (Hambleton et al., 2010), which differs from the usual CTT view of a general summary of reliability (e.g., Cronbach's alpha, or McDonald's omega) for all levels across θ; c) allows the use of mixed-format tests (e.g., tests composed of both single and multiple selection items), with no unbalanced impact upon test scores (Embretson & Reise, 2000); d) offers models to perform robust analysis of distractors in tests (e.g., R. Bock, 1972).…”
Section: Introduction (mentioning; confidence: 99%)
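
Points (a) and (b) in this excerpt can be made concrete with the three-parameter logistic (3PL) model, whose item characteristic curve is P(θ) = c + (1 − c) / (1 + exp(−a(θ − b))), with discrimination a, difficulty b, and pseudo-guessing c. The sketch below is a generic illustration, not code from the chapter or the citing paper: it evaluates the curve and Birnbaum's item information function, then sums item information into test information, the quantity IRT inspects across θ instead of a single reliability coefficient such as Cronbach's alpha. The example items and θ grid are hypothetical.

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL item characteristic curve: P(correct | theta) with
    discrimination a, difficulty b, and pseudo-guessing c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b, c):
    """Item information for the 3PL model:
    I(theta) = a^2 * (P - c)^2 / (1 - c)^2 * (1 - P) / P."""
    p = p_3pl(theta, a, b, c)
    return a**2 * (p - c) ** 2 / (1.0 - c) ** 2 * (1.0 - p) / p

# Hypothetical three-item test: information peaks near each item's difficulty b,
# so measurement precision varies across the theta range.
theta = np.linspace(-3, 3, 7)
items = [(1.2, -1.0, 0.20), (0.8, 0.0, 0.25), (1.5, 1.0, 0.20)]   # (a, b, c) per item
test_info = sum(item_information(theta, a, b, c) for a, b, c in items)
for t, info in zip(theta, test_info):
    print(f"theta = {t:+.1f}   test information = {info:.2f}")
```

The conditional standard error of the ability estimate is 1 / sqrt(test information), which is how IRT expresses reliability locally at each θ rather than as one overall coefficient.
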