2020
DOI: 10.1214/19-ba1176
Conjugate Priors and Posterior Inference for the Matrix Langevin Distribution on the Stiefel Manifold

Abstract: Directional data emerge in a wide array of applications, ranging from atmospheric sciences to medical imaging. Modeling such data, however, poses unique challenges by virtue of their being constrained to non-Euclidean spaces such as manifolds. Here, we present a unified Bayesian framework for inference on the Stiefel manifold using the Matrix Langevin distribution. Specifically, we propose a novel family of conjugate priors and establish a number of theoretical properties relevant to statistical inference. Conju…

Cited by 6 publications (5 citation statements). References: 51 publications.
“…The model considered here, as most of the literature on statistics for Stiefel manifolds, is estimated with MLE or MAP. Recently, [46] proposed a Bayesian framework for von Mises-Fisher distributions which allows computing the posterior distribution of F given observations of X. An interesting question would be to analyze the behavior of this posterior distribution in a hierarchical model where X is a latent variable, in a direction similar to the works of [41] and [20].…”
Section: Discussion (mentioning)
confidence: 99%
“…The distribution considered for X is the von Mises-Fisher (vMF) distribution, also called Matrix Langevin distribution in the literature. It was first introduced by [32], who derived basic properties of the distribution and its Maximum Likelihood Estimator (MLE), and was further studied for both theoretical and algorithmic purposes [12,30,35,46]. The von Mises-Fisher distribution over V np is defined by its probability density function (p.d.f.)…”
Section: Eigenvectors Distribution (mentioning)
confidence: 99%
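The excerpt above breaks off just before the density itself. For reference only, and not as a quotation from the citing paper, the standard form of the matrix Langevin (von Mises-Fisher) density on the Stiefel manifold V_{n,p}, taken with respect to the invariant Haar measure, is

p(X \mid F) \;=\; \frac{\operatorname{etr}\!\left(F^{\top} X\right)}{{}_{0}F_{1}\!\left(\tfrac{n}{2};\, \tfrac{1}{4} F^{\top} F\right)}, \qquad X \in V_{n,p},

where etr(A) = exp(tr A) and {}_{0}F_{1} is a hypergeometric function of matrix argument. The intractability of this normalizing constant is precisely the obstacle raised in the excerpts below.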
“…Correlation can be introduced between the patterns by using Fisher–Bingham distributions on the Stiefel manifold [38] and between pattern weights with full Gaussian covariance matrices. Another direction to develop is the quantification of the uncertainty: by adding prior distributions on F and , a Bayesian analysis would naturally provide posterior confidence regions for the model parameters [47]. Finally, our framework could be adapted to model graph Laplacian matrices instead of adjacency matrices.…”
Section: Discussion (mentioning)
confidence: 99%
“…The main obstacle to retrieving the parameter F given samples is the normalizing constant of the distribution: though analytically known, it is hard to compute in practice (see Pal et al. [47] for a computation procedure when ). Jupp and Mardia [48] proved that the MLE exists and is unique as long as and , or and .…”
Section: A Maximum Likelihood Estimation Algorithm (mentioning)
confidence: 99%
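As a companion to the excerpt above, here is a minimal numpy sketch, not taken from any of the cited papers, of the easy half of maximum likelihood estimation for the matrix Langevin distribution: recovering the orientation part of F from the SVD of the sample mean. The function name and the synthetic data are illustrative assumptions; solving for the concentrations requires the normalizing constant discussed above and is deliberately left out.

import numpy as np

def langevin_mle_orientation(samples):
    """Orientation part of the matrix Langevin MLE (sketch).

    Writing F = Gamma @ diag(lam) @ Theta.T, the likelihood-maximizing
    orientation (Gamma, Theta) is given by the singular vectors of the
    sample mean. The concentrations lam must then be matched to the
    singular values d through the normalizing constant 0F1(n/2, F'F/4),
    which is the hard step and is not attempted here.
    """
    X_bar = np.mean(samples, axis=0)                      # n x p sample mean
    U, d, Vt = np.linalg.svd(X_bar, full_matrices=False)  # X_bar = U diag(d) Vt
    return U, Vt.T, d                                     # Gamma_hat, Theta_hat, d

# Hypothetical usage: 100 synthetic frames on V_{5,2} (orthonormal 5 x 2 matrices)
rng = np.random.default_rng(0)
samples = np.stack([np.linalg.qr(rng.normal(size=(5, 2)))[0] for _ in range(100)])
Gamma_hat, Theta_hat, d = langevin_mle_orientation(samples)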
“…Parameter estimation in models of directional data has largely focused on large sample asymptotics, maximum likelihood methods and Bayesian methods [12,37].…”
Section: Introduction (mentioning)
confidence: 99%