2022
DOI: 10.1007/s11336-022-09863-9

Computation for Latent Variable Model Estimation: A Unified Stochastic Proximal Framework

Abstract: Latent variable models have been playing a central role in psychometrics and related fields. In many modern applications, the inference based on latent variable models involves one or several of the following features: (1) the presence of many latent variables, (2) the observed and latent variables being continuous, discrete, or a combination of both, (3) constraints on parameters, and (4) penalties on parameters to impose model parsimony. The estimation often involves maximizing an objective function based on…

Cited by 15 publications
(11 citation statements)
References 81 publications
“…The second stage continues with w = 0, where the average of the iterates is used as starting values for the next stage. For the final stage, the iterate averaging procedure with 1/2 < w < 1 was shown to be promising for finite-sample problems (Polyak & Juditsky, 1992; Ruppert, 1991), and it has also been successfully applied to the parameter estimation of latent variable models (Zhang & Chen, 2020). Specifically, θ̄_t = (t − s)⁻¹ Σ_{h=s+1}^{t} θ̂_h, where each θ̂_h is computed as in the typical RM algorithm and s denotes the “burn-in” length.…”
Section: Efficient Metropolis-Hastings Robbins-Monro Algorithm
confidence: 99%
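The averaging scheme quoted above is straightforward to implement. The following is a minimal sketch of a Robbins-Monro iteration with step sizes t^(−w), 1/2 < w < 1, and trailing iterate averaging after a burn-in of length s. The quadratic objective and the noisy_gradient oracle are illustrative stand-ins, not the MCMC-based gradient estimate of the cited MH-RM algorithm.

```python
import numpy as np

def noisy_gradient(theta, rng):
    # Illustrative stochastic gradient oracle: gradient of a quadratic
    # objective centered at 1, plus Gaussian noise (a stand-in for the
    # sampling-based gradient estimate used in stochastic approximation).
    return (theta - 1.0) + rng.normal(scale=0.5, size=theta.shape)

def rm_with_averaging(theta0, n_iter=2000, s=500, w=0.75, seed=0):
    """Robbins-Monro iteration with trailing (Polyak-Ruppert) averaging.

    Step sizes decay as t**(-w) with 1/2 < w < 1; iterates after the
    burn-in length s are averaged:
        theta_bar_t = (t - s)^(-1) * sum_{h=s+1}^{t} theta_hat_h
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    running_sum = np.zeros_like(theta)
    for t in range(1, n_iter + 1):
        gamma = t ** (-w)                       # slowly decaying step size
        theta = theta - gamma * noisy_gradient(theta, rng)
        if t > s:                               # accumulate post-burn-in iterates
            running_sum += theta
    return running_sum / (n_iter - s)           # averaged estimator theta_bar

print(rm_with_averaging(np.zeros(3)))           # approaches the vector of ones
```

Averaging the post-burn-in iterates, rather than reporting the last iterate, is what recovers the efficiency of the estimator despite the slowly decaying step sizes.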
“…(2016) applied the EM algorithm (Dempster et al., 1977) combined with the coordinate descent algorithm (Friedman, Hastie, & Tibshirani, 2010), and they chose the regularization parameter with the smallest BIC value. Zhang and Chen (2021) proposed a quasi-Newton stochastic proximal algorithm to optimize the L1-penalized log-likelihood, and they proved that their algorithm converges to a stationary point of the L1-penalized log-likelihood. Moreover, Zhang and Chen (2021) did not assume constraint (ii) on A as described in Section 2.1, and they stated that the rotational indeterminacy issue is resolved in the L1-regularized estimator.…”
Section: Latent Variable Selection in MIRT Models
confidence: 99%
“…Zhang and Chen (2021) proposed a quasi-Newton stochastic proximal algorithm to optimize the L1-penalized log-likelihood, and they proved that their algorithm converges to a stationary point of the L1-penalized log-likelihood. Moreover, Zhang and Chen (2021) did not assume constraint (ii) on A as described in Section 2.1, and they stated that the rotational indeterminacy issue is resolved in the L1-regularized estimator. Unfortunately, this does not hold for the L0-regularized estimator in this paper.…”
Section: Latent Variable Selection in MIRT Models
confidence: 99%
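Both snippets above refer to stochastic proximal optimization of an L1-penalized log-likelihood. As a point of reference, the core building block is the L1 proximal operator (soft-thresholding) applied after a gradient step. The sketch below shows only that generic proximal-gradient step under illustrative inputs; it is not the specific quasi-Newton update of Zhang and Chen (2021).

```python
import numpy as np

def prox_l1(v, threshold):
    """Proximal operator of threshold * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - threshold, 0.0)

def stochastic_proximal_step(theta, grad_estimate, step_size, lam):
    """One stochastic proximal-gradient update for an L1-penalized objective:
    a gradient step on the (estimated) negative log-likelihood followed by
    soft-thresholding, which sets small entries exactly to zero and thus
    performs latent variable selection through sparsity in the loadings.
    """
    return prox_l1(theta - step_size * grad_estimate, step_size * lam)

# Illustrative use on a loading vector: small entries are zeroed out.
theta = np.array([0.9, -0.03, 0.4, 0.01])
grad = np.array([0.1, 0.02, -0.05, 0.0])        # stand-in for a noisy gradient
print(stochastic_proximal_step(theta, grad, step_size=0.5, lam=0.1))
```

Because the soft-thresholding step produces exact zeros, the regularization parameter lam directly controls which loadings are retained, which is why it is typically chosen by an information criterion such as BIC in the cited work.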