2005
DOI: 10.1007/s10208-004-0156-8
Solving the Likelihood Equations

Abstract: Given a model in algebraic statistics and data, the likelihood function is a rational function on a projective variety. Algebraic algorithms are presented for computing all critical points of this function, with the aim of identifying the local maxima in the probability simplex. Applications include models specified by rank conditions on matrices and the Jukes-Cantor models of phylogenetics. The maximum likelihood degree of a generic complete intersection is also determined.
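As a loose illustration of the abstract (a minimal sketch only, not the paper's algebraic algorithms, which work projectively with the likelihood ideal): for a toy one-parameter model one can form the log-likelihood from observed counts, clear denominators in the score equation, and solve for all of its critical points. The model, data values, and variable names below are illustrative assumptions.

```python
# Minimal sketch: all critical points of a likelihood function for a toy
# one-parameter model, using SymPy. Model and data are illustrative only.
import sympy as sp

t = sp.symbols('t')

# Toy parametric curve inside the probability simplex (Hardy-Weinberg type):
# p(t) = (t^2, 2 t (1 - t), (1 - t)^2), 0 < t < 1.
p = [t**2, 2*t*(1 - t), (1 - t)**2]

# Observed counts (made-up data).
u = [10, 7, 3]

# Log-likelihood and the score equation, cleared to a single fraction.
loglik = sum(ui * sp.log(pi) for ui, pi in zip(u, p))
score = sp.together(sp.diff(loglik, t))

# All complex critical points = roots of the numerator of the score equation.
critical_points = sp.solve(sp.numer(score), t)
print(critical_points)

# Keep the statistically meaningful solutions (real, inside the open simplex)
# and pick the one with the largest likelihood.
feasible = [c for c in critical_points if c.is_real and 0 < c < 1]
mle = max(feasible, key=lambda c: float(loglik.subs(t, c)))
print(mle)
```

For this toy curve the cleared score equation is linear in t, so there is a single critical point; for generic data the number of complex critical points is the model's maximum likelihood degree, the invariant studied in the paper.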

Cited by 79 publications (163 citation statements)
References 13 publications

“…It is tacitly assumed that s < k. If P has k or more generators then we can obtain a bound by replacing P by a suitable subideal. The following result appeared as Theorem 5 in [61]. …”
Section: Likelihood Equations for Implicit Models
confidence: 97%
“…We refer to [61] for details on the practical implementation of the algorithm, including the delicate work of computing in the quotient ring R[V] = R[p_1, …”
Section: Algorithm 229 (Computing the Likelihood Ideal)
confidence: 99%
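As a rough illustration of the kind of quotient-ring arithmetic mentioned in the excerpt above (a minimal SymPy sketch under made-up assumptions; the ideal P, the polynomial f, and the variable names are not taken from [61] or the citing paper): computations in R[V] = R[p1, p2, p3]/P can be carried out by reducing polynomials to normal form modulo a Gröbner basis of P.

```python
# Minimal sketch: normal forms modulo a defining ideal, using SymPy.
import sympy as sp

p1, p2, p3 = sp.symbols('p1 p2 p3')

# Illustrative defining ideal P of an implicit model: a single quadric
# plus the sum-to-one constraint (a made-up example).
P = [p1*p3 - p2**2, p1 + p2 + p3 - 1]

# A Groebner basis of P; arithmetic in the quotient ring R[V] = R[p1,p2,p3]/P
# amounts to reducing polynomials to their normal form modulo this basis.
G = sp.groebner(P, p1, p2, p3, order='lex')

# Normal form of a sample polynomial f modulo P.
f = p1**2 * p3 + p2**3
_, normal_form = sp.reduced(f, list(G), p1, p2, p3, order='lex')
print(normal_form)
```

Specialized systems such as Singular or Macaulay2 are the usual tools for this kind of computation; SymPy is used here only to keep the sketch self-contained.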
“…Exact maximum likelihood estimation (e.g., Yang 2000; Hosten et al. 2005; Casanellas et al. 2005) as well as exact posterior sampling (Sainudiin and York 2009) is only feasible for small sample sizes (n ≤ 4). The standard approach is to rely on Monte Carlo Markov chain (MCMC) algorithms (Metropolis et al. 1953; Hastings 1970) to obtain dependent samples from the posterior under the assumption that the algorithm has converged to the desired stationary distribution.…”
Section: Multiple Sequence Alignment
confidence: 99%
“…In an ideal world, the optimal inference procedure would be based on the minimally sufficient statistic and implemented in a computing environment free of engineering constraints. Unfortunately, minimally sufficient statistics of data at the currently finest resolution of U_n^m are unknown beyond the simplest models of mutation with small values of n (Yang 2000; Hosten et al. 2005; Casanellas et al. 2005; Sainudiin and York 2009). Computationally-intensive inference, based on an observed u_o ∈ U_n^m, with realistically large n and m, is currently impossible for recombining loci and prohibitive for non-recombining loci.…”
Section: Introduction
confidence: 99%