1992
DOI: 10.1111/j.2517-6161.1992.tb01875.x
An Exact Cholesky Decomposition and the Generalized Inverse of the Variance–Covariance Matrix of the Multinomial Distribution, with Applications

Abstract: A symbolic formula is given for the square-root-free Cholesky decomposition of the variance–covariance matrix of the multinomial distribution. The evaluation of the symbolic Cholesky factors requires far fewer arithmetic operations than does the general Cholesky algorithm. Since the symbolic formula is not affected by an ill-conditioned matrix, it is particularly useful when the elements of a probability vector are of quite different orders of magnitude. A simpler formula is obtained for Pederson's pr…
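The abstract's claim can be illustrated numerically. Below is a minimal sketch of a closed-form LDLᵀ (square-root-free Cholesky) factorization of the multinomial covariance Σ = diag(p) − ppᵀ, in the spirit of the paper. The specific formulas used here — unit lower-triangular L with L[j,i] = −p_j / q_{i+1} for j > i, and diagonal D with D_i = p_i q_{i+1} / q_i, where q_i = Σ_{k≥i} p_k are tail sums — are a reconstruction verified numerically, not quoted from the paper itself:

```python
import numpy as np

def multinomial_symbolic_ldl(p):
    """Closed-form LDL^T factors of Sigma = diag(p) - p p^T.

    Sketch of a square-root-free Cholesky decomposition in the spirit of
    Tanabe & Sagae (1992); the exact indexing below is a reconstruction.
    q[i] = p[i] + p[i+1] + ... + p[K-1] are tail sums, with q[K] = 0.
    """
    p = np.asarray(p, dtype=float)
    K = p.size
    q = np.append(np.cumsum(p[::-1])[::-1], 0.0)  # q[i] = sum_{k >= i} p[k]
    L = np.eye(K)
    d = np.empty(K)
    for i in range(K):
        d[i] = p[i] * q[i + 1] / q[i]      # diagonal factor D_ii
        for j in range(i + 1, K):
            L[j, i] = -p[j] / q[i + 1]     # unit lower-triangular factor
    return L, d

p = np.array([0.5, 0.3, 0.15, 0.05])
L, d = multinomial_symbolic_ldl(p)
Sigma = np.diag(p) - np.outer(p, p)
print(np.allclose(L @ np.diag(d) @ L.T, Sigma))  # True
```

Note that the factors are evaluated directly from p in O(K²) elementwise operations, with no subtractions of near-equal quantities, which is what makes the symbolic form robust when the p_i span very different orders of magnitude.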

Cited by 47 publications (29 citation statements)
References 8 publications
“…The form of the matrix H(x, γ) and Theorem 1 in Tanabe and Sagae (1992) show that H(x, γ) is symmetric positive definite with 0 < λ_min(H(x, γ)) ≤ λ_max(H(x, γ)) < 1, which implies that H(x, γ) ≥ λ_min(H(x, γ)) I_J and λ_min(H(x, γ)) ≥ det(H(x, γ)). These results and the exact Cholesky decomposition of H(x, γ) give inf_{x∈X} H(x, γ) ≥ inf_{x∈X} ∏_{t=0}^{J} L_t(g_{−0}(x, γ)) I_J, in a positive semidefinite sense.…”
Section: Appendix A: Proofs of Theorems
confidence: 99%
“…where e is a K × 1 vector given by e = (1, 1, …, 1)ᵀ. The expression for the inverse is given, for example, in Tanabe and Sagae (1992). The inverse of the covariance matrix Σ_p is then given by the block-diagonal matrix Σ_p⁻¹ = diag(Σ_{p11}⁻¹, ⋯, Σ_{pN_ms}⁻¹).…”
confidence: 99%
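The quoted passage relies on a closed form for the inverse of a multinomial-type covariance block. For the singular multinomial covariance Σ = diag(p) − ppᵀ itself, one easily checked generalized inverse is G = diag(1/p), since Σ G Σ = Σ. This is a standard illustrative identity of the kind the paper provides in closed form, not necessarily the specific g-inverse used in the cited work:

```python
import numpy as np

# Sketch: G = diag(1/p) is a generalized inverse of the singular multinomial
# covariance Sigma = diag(p) - p p^T, i.e. Sigma @ G @ Sigma == Sigma.
# Illustrative identity only; not necessarily the g-inverse used above.
p = np.array([0.6, 0.25, 0.1, 0.05])
Sigma = np.diag(p) - np.outer(p, p)
G = np.diag(1.0 / p)
print(np.allclose(Sigma @ G @ Sigma, Sigma))  # True
```

Σ is singular here because its rows sum to zero (p sums to 1), so an ordinary inverse does not exist and a generalized inverse is the natural object to tabulate.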
“…Since R_true is a covariance matrix, it can be factorized using a Cholesky decomposition [24]. Therefore, we can write R_true = C_true C_true†, where C_true is a lower triangular matrix.…”
Section: Problem Formulation and Optimal Solution
confidence: 99%