2003
DOI: 10.1007/978-0-387-21540-2_2

Distributions on the Special Manifolds


Citations: Cited by 24 publications (39 citation statements)
References: 0 publications
“…This is because the structural tensor (18) is positive definite, and therefore the pixels of J(x) defined as above are no longer elements of a Euclidean space, but rather of a nonlinear manifold. Fortunately, the availability of kernel-based methods for estimating probability densities defined over nonlinear manifolds makes it possible to apply our approach in this situation as well [41].…”
Section: B. Examples of Discriminative Features (mentioning)
confidence: 99%
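
The kernel-based density estimation over a manifold of positive-definite structure tensors that this statement refers to is not spelled out in the excerpt. As a rough illustration only, here is a minimal sketch assuming a Gaussian kernel on log-Euclidean distances between SPD matrices; the metric, kernel, and bandwidth are assumptions made here, not taken from [41]:

```python
import numpy as np

def spd_log(A):
    # Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(A, B):
    # Log-Euclidean distance: Frobenius norm of the difference of matrix logs.
    return np.linalg.norm(spd_log(A) - spd_log(B), ord="fro")

def kde_spd(query, samples, bandwidth=0.5):
    # Unnormalized Gaussian-kernel density estimate at the SPD matrix `query`,
    # built from a list of SPD `samples`; the manifold volume normalizer is omitted.
    d2 = np.array([log_euclidean_dist(query, S) ** 2 for S in samples])
    return np.mean(np.exp(-d2 / (2.0 * bandwidth ** 2)))
```
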
“…It is interesting to note that, in this case, the velocity V(x) is given by the log-likelihood ratio (40). Thus, for example, when the feature image J(x) is formed by image intensities (i.e., J(x) = I(x)) and the densities p−(z) and p+(z) are Gaussian with mean values μ− and μ+ and variances σ−² and σ+², respectively, the velocity (40) becomes (41), where the estimates (42) are supposed to be recomputed at each iteration.…”
Section: Comparative Study (mentioning)
confidence: 99%
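
Equations (40)-(42) themselves are not reproduced in the excerpt. Purely for orientation, here is a minimal sketch of the generic construction the statement describes: a log-likelihood-ratio velocity under the stated Gaussian assumption, with means and variances re-estimated from the current samples. All names below are hypothetical:

```python
import numpy as np

def gaussian_log_pdf(z, mu, var):
    # Log-density of a scalar Gaussian with mean `mu` and variance `var`.
    return -0.5 * np.log(2.0 * np.pi * var) - (z - mu) ** 2 / (2.0 * var)

def velocity(z, inside_vals, outside_vals):
    # V(z) = log p_plus(z) - log p_minus(z), with the two Gaussians estimated from
    # the current "inside" / "outside" intensity samples; in a segmentation loop
    # these estimates would be recomputed at every iteration.
    inside_vals, outside_vals = np.asarray(inside_vals), np.asarray(outside_vals)
    mu_p, var_p = inside_vals.mean(), inside_vals.var() + 1e-12
    mu_m, var_m = outside_vals.mean(), outside_vals.var() + 1e-12
    return gaussian_log_pdf(z, mu_p, var_p) - gaussian_log_pdf(z, mu_m, var_m)
```
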
“…The covariance in this case is defined in terms of the rows and columns of the matrix and is not necessarily orthogonally invariant. A connection between this distribution and James' is made by Chikuse [7]. In modern random matrix theory, the most common Gaussian distribution for symmetric matrices is the Gaussian Orthogonal Ensemble (GOE), which was developed independently in the physics literature (Mehta [24]).…”
Section: Introduction. Consider the Signal-Plus-Noise Model (mentioning)
confidence: 99%
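
For reference, a GOE draw can be obtained by symmetrizing an i.i.d. Gaussian matrix. The normalization below (variance 1 on the diagonal, 1/2 off it) is one common convention and may differ from the one used in the citing paper:

```python
import numpy as np

def sample_goe(n, seed=None):
    # Draw an n x n matrix from the Gaussian Orthogonal Ensemble (one common
    # normalization): symmetrize an i.i.d. standard-normal matrix.
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((n, n))
    return (G + G.T) / 2.0  # diagonal variance 1, off-diagonal variance 1/2
```
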
“…There exist only a few distributions on the Stiefel or Grassmann manifolds [7], [8], the most popular being the Bingham or von Mises-Fisher (vMF) distributions, which have proven relevant in a number of applications including meteorology, biology, medicine, image analysis (see [7] and references therein), modeling of multipath communication channels [9], and shape analysis [10]. These distributions depend on a matrix whose range space is "close" to that of U, along with a concentration parameter that governs the distance between the subspaces.…”
Section: B. Prior Distributions (mentioning)
confidence: 99%
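
As a minimal sketch of the two families named above, with densities written only up to their normalizing constants (which involve hypergeometric functions of matrix argument); the parameter names are chosen here for illustration:

```python
import numpy as np

def etr(X):
    # exp(trace(X)), the usual shorthand in matrix-variate densities.
    return np.exp(np.trace(X))

def vmf_unnormalized(U, F):
    # Matrix von Mises-Fisher (Langevin) density on the Stiefel manifold,
    # p(U) proportional to etr(F^T U), for an n x p parameter matrix F.
    return etr(F.T @ U)

def bingham_unnormalized(U, A):
    # Matrix Bingham density, p(U) proportional to etr(U^T A U), with A symmetric.
    return etr(U.T @ A @ U)
```
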
“…These distributions depend on a matrix whose range space is "close" to that of U, along with a concentration parameter that governs the distance between the subspaces. More specifically, we assume that R(U) is close to a given subspace spanned by the columns of an orthonormal matrix Ū, and we consider the following prior distributions for U [7], [8]:…”
Section: B. Prior Distributions (mentioning)
confidence: 99%
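
The prior densities introduced after the colon are not included in the excerpt. For context only, the standard matrix von Mises-Fisher and Bingham priors centered on the span of an orthonormal matrix $\bar{U}$, with concentration parameter $\kappa$, take the forms (notation assumed here rather than quoted):

$$
p_{\mathrm{vMF}}(U) \propto \operatorname{etr}\!\big(\kappa\, \bar{U}^{\top} U\big),
\qquad
p_{\mathrm{Bingham}}(U) \propto \operatorname{etr}\!\big(\kappa\, U^{\top} \bar{U} \bar{U}^{\top} U\big),
$$

where $\operatorname{etr}(X) = \exp(\operatorname{tr} X)$ and a larger $\kappa$ concentrates the prior more tightly around $\mathcal{R}(\bar{U})$.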