PsycEXTRA Dataset 1996
DOI: 10.1037/e427312008-001

ACT research report series: Estimation of item response models using the EM algorithm for finite mixtures

Abstract: This paper presents a detailed description of maximum likelihood parameter estimation for item response models using the general EM algorithm. In this paper the models are specified using a univariate discrete latent ability variable. When the latent ability variable is discrete, the distribution of the observed item responses is a finite mixture, and the EM algorithm for finite mixtures can be used. Maximum likelihood estimates of the item parameters and of the discrete probabilities of the latent ability distribution…
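For context, the finite-mixture structure the abstract refers to can be written out. With ability restricted to a discrete grid $q_1, \dots, q_K$ with weights $\pi_k$ and item parameters $\Delta$, the marginal probability of examinee $i$'s response pattern $\mathbf{y}_i$ is a $K$-component finite mixture (notation is ours, not the report's):

$$P(\mathbf{y}_i \mid \Delta, \pi) = \sum_{k=1}^{K} \pi_k\, P(\mathbf{y}_i \mid q_k, \Delta), \qquad P(\mathbf{y}_i \mid q_k, \Delta) = \prod_{j=1}^{J} P_j(q_k)^{y_{ij}}\, \{1 - P_j(q_k)\}^{1 - y_{ij}},$$

where $P_j(q_k)$ is item $j$'s response function (e.g., a 2PL or 3PL curve) evaluated at grid point $q_k$.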

Cited by 20 publications (33 citation statements) | References 20 publications
“…As shown by Woodruff and Hanson (1996), the ML estimation of $(\Delta, \pi)$ at iteration $s$ can be simplified by expressing the equation as a function of two additive terms, $\varphi(\Delta) + \psi(\pi)$, as follows:

$$\varphi(\Delta) = \sum_{i=1}^{N} \sum_{k=1}^{K} P^{(s)}(q_k \mid \mathbf{y}_i)\, \log P(\mathbf{y}_i \mid q_k, \Delta)$$

and

$$\psi(\pi) = \sum_{i=1}^{N} \sum_{k=1}^{K} P^{(s)}(q_k \mid \mathbf{y}_i)\, \log \pi_k,$$

where $P^{(s)}(q_k \mid \mathbf{y}_i)$ is the conditional likelihood (often referred to as the “posterior” likelihood) for examinee $i$ that $\Theta_i = q_k$, given the fixed known values $\mathbf{y}_i$, $\Delta^{(s-1)}$, and $\pi^{(s-1)}$, and computed by

$$P^{(s)}(q_k \mid \mathbf{y}_i) = \frac{\pi_k^{(s-1)}\, P(\mathbf{y}_i \mid q_k, \Delta^{(s-1)})}{\sum_{m=1}^{K} \pi_m^{(s-1)}\, P(\mathbf{y}_i \mid q_m, \Delta^{(s-1)})}.$$

Notice here that the first term $\varphi(\Delta)$ depends only on $\Delta$ and the second term $\psi(\pi)$ depends only on $\pi$. Thus, the M step at iteration $s$ finds the ML estimates $\Delta^{(s)}$ and $\pi^{(s)}$ that maximize $\varphi(\Delta)$ and $\psi(\pi)$, respectively.…”
Section: Fixed Parameter Calibration Using MMLE via the EM Algorithm
confidence: 84%
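A concrete reading of the decomposition quoted above: a minimal sketch of one EM iteration for a 2PL model on a discrete ability grid. This is not the report's code; the 2PL form, the toy data, and the scipy optimizer are our assumptions. Only the E-step posterior and the separate maximization of φ(Δ) and ψ(π) follow the passage.

```python
# One EM iteration for a finite-mixture (discrete-ability) 2PL IRT model.
# Sketch only: names, data, and optimizer choices are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, J, K = 500, 10, 21                  # examinees, items, ability grid points
q = np.linspace(-4.0, 4.0, K)          # discrete ability grid q_1..q_K
Y = rng.integers(0, 2, size=(N, J))    # 0/1 item-response matrix (toy data)

def irf(a, b, q):
    """2PL item response function: J x K matrix of P_j(q_k)."""
    return 1.0 / (1.0 + np.exp(-a[:, None] * (q[None, :] - b[:, None])))

def log_lik_given_q(Y, a, b, q):
    """log P(y_i | q_k, Delta): N x K conditional log-likelihoods."""
    P = irf(a, b, q)
    return Y @ np.log(P) + (1 - Y) @ np.log1p(-P)

def e_step(Y, a, b, pi, q):
    """Posterior P^(s)(q_k | y_i), the 'posterior likelihood' in the quote."""
    logw = log_lik_given_q(Y, a, b, q) + np.log(pi)[None, :]
    logw -= logw.max(axis=1, keepdims=True)   # stabilize before exponentiating
    W = np.exp(logw)
    return W / W.sum(axis=1, keepdims=True)   # N x K, rows sum to 1

def m_step(Y, W, q, a0, b0):
    """Maximize phi(Delta) and psi(pi) separately, as the passage notes."""
    pi_new = W.mean(axis=0)                   # closed-form maximizer of psi(pi)

    def neg_phi(theta):                       # -phi(Delta), maximized numerically
        a, b = theta[:J], theta[J:]
        return -(W * log_lik_given_q(Y, a, b, q)).sum()

    res = minimize(neg_phi, np.concatenate([a0, b0]), method="L-BFGS-B")
    return res.x[:J], res.x[J:], pi_new

# one full EM iteration from a crude starting point
a, b, pi = np.ones(J), np.zeros(J), np.full(K, 1.0 / K)
W = e_step(Y, a, b, pi, q)
a, b, pi = m_step(Y, W, q, a, b)
```

The separation does the work here: ψ(π) has a closed-form maximizer (average posterior weights), while φ(Δ) is a posterior-weighted item log-likelihood that can be maximized numerically, item by item if desired.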
“…The MMLE‐EM method for the usual test data in which all items are “free” and need to be calibrated has been well established in the literature (e.g., Bock & Aitkin, 1981; Mislevy, 1984; Mislevy & Bock, 1985; Woodruff & Hanson, 1996). This section describes the essential elements of the MMLE‐EM approach for FPC, which can be viewed as an adapted version of the usual MMLE‐EM method.…”
Section: Fixed Parameter Calibration Using MMLE via the EM Algorithm
confidence: 99%
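The "adapted version" for FPC can be made concrete: the E step uses all items (fixed and free) to form the posteriors, while the M step re-estimates only the free items. A hedged sketch building on irf, log_lik_given_q, and e_step from the sketch above; the m_step_fpc name and the free-index mechanism are our invention, not an API from the cited papers.

```python
# M step adapted for fixed parameter calibration (FPC): operational items'
# parameters stay fixed; only the `free` items are re-estimated.
# Assumes log_lik_given_q from the previous sketch is in scope.
import numpy as np
from scipy.optimize import minimize

def m_step_fpc(Y, W, q, a, b, free):
    """Update pi and the free items' (a, b); leave fixed items untouched."""
    pi_new = W.mean(axis=0)               # latent weights are still re-estimated
    n_free = len(free)

    def neg_phi_free(theta):              # -phi(Delta) over free parameters only
        a2, b2 = a.copy(), b.copy()
        a2[free], b2[free] = theta[:n_free], theta[n_free:]
        return -(W * log_lik_given_q(Y, a2, b2, q)).sum()

    x0 = np.concatenate([a[free], b[free]])
    res = minimize(neg_phi_free, x0, method="L-BFGS-B")
    a_new, b_new = a.copy(), b.copy()
    a_new[free], b_new[free] = res.x[:n_free], res.x[n_free:]
    return a_new, b_new, pi_new
```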
“…Therefore, they must be estimated from sample data using, for example, marginal maximum likelihood estimation via the EM algorithm (Bock and Aitkin 1981; Harwell et al. 1988; Thissen 1982; Woodruff and Hanson 1996) or Bayes modal estimation via the EM algorithm (Harwell and Baker 1991; Mislevy 1986; Tsutakawa and Lin 1986). The ability distribution is often taken a priori as a unit normal distribution [denoted N(0, 1)], but can be estimated from sample data parametrically (Mislevy 1984) or discretely (Bock and Aitkin 1981; Woodruff and Hanson 1996) using the EM algorithm. The maximum likelihood estimates of relative weights (i.e., probabilities) at discrete ability points are often labeled “posterior” weights in the literature (Mislevy and Bock 1990).…”
Section: Practical Issues
confidence: 99%
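The "posterior" weights mentioned here have a closed-form M-step update in the discrete case; this is the standard finite-mixture EM result, written in our notation:

$$\pi_k^{(s)} = \frac{1}{N} \sum_{i=1}^{N} P^{(s)}(q_k \mid \mathbf{y}_i), \qquad k = 1, \dots, K,$$

that is, each updated weight is the average posterior probability of grid point $q_k$ across the $N$ examinees.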
“…The expectation-maximization algorithm is useful within both supervised and semi-supervised methods. [1] Several techniques for learning statistical models have been developed recently by researchers in machine learning and data mining. All of these techniques must address a similar set of representational and algorithmic choices and must face a set of statistical challenges unique to learning from relational data.…”
Section: Literature Review: Bhawna Nigam (2011), Document Classi…
confidence: 99%
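To make the semi-supervised use of EM concrete, here is a minimal sketch in which labeled data initialize a two-class Gaussian model, the E step soft-labels the unlabeled data, and the M step refits on everything. This is illustrative only and is not drawn from the cited paper; all names and the Gaussian class model are our assumptions.

```python
# Semi-supervised EM sketch: a naive-Bayes-style classifier with Gaussian
# class-conditional densities, trained on labeled + soft-labeled data.
import numpy as np

rng = np.random.default_rng(1)
# toy 1-D data: two classes, a few labeled points, many unlabeled points
X_lab = np.concatenate([rng.normal(0, 1, 10), rng.normal(4, 1, 10)])
y_lab = np.array([0] * 10 + [1] * 10)
X_unl = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 200)])

def fit(X, R):
    """M step: class priors, means, and sds from responsibilities R (n x 2)."""
    w = R.sum(axis=0)
    mu = (R * X[:, None]).sum(axis=0) / w
    var = (R * (X[:, None] - mu) ** 2).sum(axis=0) / w
    return w / w.sum(), mu, np.sqrt(var)

def resp(X, prior, mu, sd):
    """E step: posterior class probabilities (normalizing constant cancels)."""
    d = prior * np.exp(-0.5 * ((X[:, None] - mu) / sd) ** 2) / sd
    return d / d.sum(axis=1, keepdims=True)

R_lab = np.eye(2)[y_lab]               # labeled responsibilities stay fixed
prior, mu, sd = fit(X_lab, R_lab)      # initialize from labeled data only
for _ in range(20):                    # EM iterations
    R_unl = resp(X_unl, prior, mu, sd)              # soft-label unlabeled data
    prior, mu, sd = fit(np.concatenate([X_lab, X_unl]),
                        np.vstack([R_lab, R_unl]))  # refit on all data
```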