2003
DOI: 10.1111/1467-9469.00317

Penalized Maximum Likelihood Estimator for Normal Mixtures

Abstract: The estimation of the parameters of a mixture of Gaussian densities is considered, within the framework of maximum likelihood. Due to the unboundedness of the likelihood function, the maximum likelihood estimator fails to exist. We adopt a solution to likelihood degeneracy which consists in penalizing the likelihood function. The resulting penalized likelihood function is bounded over the parameter space, and the existence of the penalized maximum likelihood estimator is guaranteed. As original contribut…
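
A minimal sketch of the idea described above, with notation assumed here for illustration (the paper's own penalty is not reproduced in this excerpt): for a K-component Gaussian mixture with weights pi_k, means mu_k and variances sigma_k^2, the ordinary log-likelihood

\[
\ell(\theta) \;=\; \sum_{i=1}^{n} \log \sum_{k=1}^{K} \pi_k\, \phi(x_i;\, \mu_k, \sigma_k^2)
\]

is unbounded: setting one component mean equal to an observation and letting that component's variance shrink to zero drives the log-likelihood to infinity at rate -\log \sigma_k. Adding an inverse-gamma-type penalty on the variances,

\[
\ell_p(\theta) \;=\; \ell(\theta) \;-\; a \sum_{k=1}^{K} \left( \frac{b}{\sigma_k^2} + \log \sigma_k^2 \right), \qquad a, b > 0,
\]

sends \ell_p(\theta) to -\infty whenever some \sigma_k^2 \to 0, because b/\sigma_k^2 diverges faster than -\log \sigma_k; the penalized log-likelihood is therefore bounded above and a penalized maximum likelihood estimator exists.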

Cited by 97 publications (57 citation statements)
References 31 publications

“…Such an approach has also been used to obtain nondegenerate covariance matrices in finite mixtures of normal densities (Ciuperca, Ridolfi, & Idier, 2003; Vermunt & Magidson, 2005) and in multivariate regression (Warton, 2008). Our penalized likelihood approach to avoid boundary estimates for variance parameters in multilevel models turns out to be similar to, but more general than, the independently developed adjustment for density maximization approach by Morris and Tang (2011).…”
Section: Introduction (mentioning); confidence: 96%
“…A penalized likelihood such as (2.1) was formulated, for example, in Ciuperca et al. (2003) and Chen and Tan (2009). Chen and Tan (2009), in particular, used a penalty that depends on the covariance matrices and the sample size.…”
Section: Penalized Likelihood (mentioning); confidence: 99%
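
One concrete form consistent with the description in this excerpt (a sketch of the kind of penalty meant, not a quotation of the cited paper's equation (2.1)) penalizes each component covariance matrix Sigma_k through the sample covariance matrix S_x of the data and a constant a_n > 0 that shrinks with the sample size n, for instance a_n = 1/n:

\[
p_n(\Sigma_k) \;=\; -\,a_n \left\{ \operatorname{tr}\!\left( S_x \Sigma_k^{-1} \right) + \log \lvert \Sigma_k \rvert \right\}.
\]

The trace term diverges as Sigma_k approaches singularity, so adding p_n to the log-likelihood keeps it bounded, while a_n -> 0 lets the penalty vanish asymptotically.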
“…However, a more general approach is to use penalized maximum likelihood estimation (PMLE), where a new likelihood function is formulated by adding a penalty term. Several authors, including Ciuperca et al. (2003), Ingrassia (2004), and Chen and Tan (2009), used this approach, and the constraint they imposed was mostly on the variances, or on the covariance matrices in multivariate cases.…”
Section: Introduction (mentioning); confidence: 99%
“…To resolve this problem, a constrained maximum likelihood estimator (MLE) (Hathaway, 1985; Tanaka and Takemura, 2006) uses a constraint on the scale parameters to compactify the parameter space. A penalized MLE, proposed by Ciuperca et al. (2003) and Chen et al. (2008), adds penalty functions to the ordinary likelihood so that the likelihood does not explode when one of the scale parameters goes to zero. The penalized MLE can also be considered a Bayesian estimator (Fraley and Raftery, 2007) with an inverse Gamma or Wishart prior for the scale parameters.…”
Section: Introduction (mentioning); confidence: 99%
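
For reference, the constraint device mentioned in this excerpt (Hathaway, 1985) is usually written, in the univariate case, as a lower bound on the ratios of component scale parameters,

\[
\min_{i \neq j} \; \frac{\sigma_i}{\sigma_j} \;\geq\; c \;>\; 0,
\]

which rules out configurations in which one variance tends to zero while another stays fixed, so the likelihood is bounded on the constrained parameter space.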
“…The penalized MLE and constrained MLE can be obtained using a slight modification of the EM algorithm for normal mixture models (Hathaway, 1986; Ingrassia and Rocci, 2007; Ciuperca et al., 2003; Chen et al., 2008).…”
Section: Introduction (mentioning); confidence: 99%
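
The last two excerpts describe the penalized MLE as a small modification of the ordinary EM algorithm, with the penalty acting like an inverse gamma prior on the variances. The sketch below illustrates that point under assumptions of my own (univariate data, an inverse-gamma-type penalty with hypothetical tuning constants a and b); it is not a reproduction of any of the cited algorithms. Only the variance update in the M-step differs from standard EM.

```python
# Minimal penalized EM for a univariate K-component normal mixture.
# The penalty -a * (b / sigma_k^2 + log sigma_k^2) on each variance is an
# illustrative inverse-gamma-type choice (a, b are hypothetical tuning
# constants); it only changes the closed-form M-step for the variances.
import numpy as np

def penalized_em(x, K=2, a=1.0, b=None, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    if b is None:
        b = np.var(x)                      # tie the penalty scale to the data spread
    pi = np.full(K, 1.0 / K)               # mixing weights
    mu = rng.choice(x, K, replace=False)   # crude initialization of the means
    var = np.full(K, np.var(x))            # component variances
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k]
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        nk = r.sum(axis=0)
        # M-step: weights and means exactly as in ordinary EM
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        # M-step for the variances, modified by the penalty:
        # sigma_k^2 = (sum_i r_ik (x_i - mu_k)^2 + 2ab) / (n_k + 2a)
        ss = (r * (x[:, None] - mu) ** 2).sum(axis=0)
        var = (ss + 2 * a * b) / (nk + 2 * a)
    return pi, mu, var

# usage sketch on synthetic data
x = np.concatenate([np.random.normal(0.0, 1.0, 200), np.random.normal(4.0, 0.5, 100)])
print(penalized_em(x, K=2))
```

Setting a = 0 recovers the ordinary EM updates; any a > 0 keeps every fitted variance strictly positive, which is exactly the degeneracy the penalty is meant to prevent.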