2001
DOI: 10.1007/bf02296197
Bayesian Inference for Graphical Factor Analysis Models

Keywords: factor analysis, graphical Gaussian models, identification, model comparison, reversible jump MCMC

Cited by 12 publications (7 citation statements). References 19 publications.
“…Extensions of graphical models which includes latent variables have recently been proposed in the sphere of cross‐sectional analysis (see, e.g. the graphical factor analysis models in Giudici and Stanghellini, 2002). We are currently working on similar extensions within the fields of time series analysis with particular emphasis on the common trends model for partially non‐stationary processes (Stock and Watson, 1988).…”
Section: Discussion (confidence: 99%)
“…We also remark that an extension of the reversible jump algorithm for undirected decomposable graphs with latent variables is presented and discussed in a recent paper by Giudici and Stanghellini (1999); the extension to the directed case is not available yet, but can be accomplished along similar lines. Finally, the reversible jump approach also permits extensions for non-conjugate priors, such as hierarchical ones, which may diminish the sensitivity of inferences to the prior distribution (see e.g.…”
Section: Discussion (confidence: 99%)
“…The supervised extraction techniques include Linear Discriminant Analysis (LDA) [33], Kernel Fisher Discriminant (KFD) Analysis [34], and an extraction framework in a kernel space [14], etc. In unsupervised learning, linear methods (such as Principal Component Analysis (PCA) [19] and factor analysis [20]) and nonlinear methods (metric learning [21] and manifold learning [22], [26], [27], [28]) are proposed.…”
Section: Related Work (confidence: 99%)
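The citation statement above groups PCA and factor analysis together as unsupervised linear extraction methods. As a minimal illustrative sketch (not code from the cited paper), PCA can be computed directly from the singular value decomposition of a centred data matrix; the data here are synthetic and the component count is an arbitrary assumption.

```python
import numpy as np

# Synthetic data: 100 samples, 5 features (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# Centre each feature, then take the SVD; rows of Vt are the principal axes.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the top-2 principal components.
k = 2
Z = Xc @ Vt[:k].T

print(Z.shape)  # (100, 2)
```

Factor analysis differs from this sketch in that it posits a latent-variable model with per-feature noise terms and is typically fitted by maximum likelihood or, as in the paper above, Bayesian methods.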