2022
DOI: 10.1093/biomet/asab056
Generalized infinite factorization models

Abstract: Factorization models express a statistical object of interest in terms of a collection of simpler objects. For example, a matrix or tensor can be expressed as a sum of rank-one components. However, in practice, it can be challenging to infer the relative impact of the different components as well as the number of components. A popular idea is to include infinitely many components having impact decreasing with the component index. This article is motivated by two limitations of existing m…
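For orientation, here is a minimal sketch of the kind of factorization the abstract refers to; the notation and prior form below are assumptions for illustration, not quoted from the paper. A p-dimensional observation y_i is modelled through a loadings matrix Λ with (conceptually) infinitely many columns whose scales decay with the column index.

```latex
% Illustrative sketch only; symbols and prior form are assumed, not taken from the paper.
\[
  y_i = \Lambda \eta_i + \epsilon_i, \qquad
  \Lambda = (\lambda_{jh}), \; h = 1, 2, \ldots, \qquad
  \lambda_{jh} \mid \theta_h \sim N(0, \theta_h), \quad \theta_h \to 0 \text{ as } h \to \infty .
\]
```

Under such a construction the higher-indexed rank-one components contribute progressively less, which is what allows the effective number of components to be inferred rather than fixed in advance.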

Cited by 14 publications (16 citation statements) · References 32 publications
“…Roy et al. included a comparison to Bayesian multi-study factor analysis [52], which can be implemented via the MSFA package [53]. The generalized infinite factor model (GIF-SIS) allows for the inclusion of such information in exploratory FA, while learning the number of factors and the loadings structure flexibly [13]. Schiavon et al. [13] compared performance with popular Bayesian FA methods based on the multiplicative gamma process [54], as implemented in the hmsc package [55].…”
Section: Pattern Identification (mentioning)
confidence: 99%
“…, m. In recent work by Schiavon et al. [25], the CUSP prior is extended to generalized infinite factorization models, where the factor loadings β_ih are allowed to be exact zeros; see also [9]. For illustration, we consider here a finite generalized CUSP prior on the column-specific shrinkage parameter θ_h in a finite overfitting model with H ≤ H_max, where H_max = (m − 1)/2 is equal to the upper bound of Anderson & Rubin [26], ensuring econometric identification.…”
Section: Application in Sparse Bayesian Factor Analysis (A) Column-sp… (mentioning)
confidence: 92%
“…from some probability distribution with support on R+, such as the gamma distribution. We find it more convenient to specify a shrinkage prior on Λ, to automatically select the number of latent factors H. This approach has received considerable attention in Gaussian latent factor models; see, for instance, Bhattacharya and Dunson (2011); Legramanti et al. (2020); Schiavon et al. (2022). In our example, we consider Λ distributed as a multiplicative gamma process (Bhattacharya and Dunson, 2011),…”
Section: Normalized Latent Measure Factor Models (mentioning)
confidence: 99%
“…In that context, a common practice to recover identifiability is to constrain the matrix Λ to be lower triangular with positive entries on the diagonal (Geweke and Zhou, 2015). More recently, it has been proposed to ignore the identifiability issue and obtain a point estimate of the posterior distribution either by post-processing the MCMC chains (see Papastamoulis and Ntzoufras, 2022; Poworoznek et al., 2021, and the references therein) or by choosing the maximum a posteriori (Schiavon et al., 2022). In particular, Poworoznek et al. (2021) propose to orthogonalize each posterior sample of Λ and then solve the sign ambiguity and label switching via a greedy matching algorithm.…”
Section: Resolving the Non-identifiability via Post-processing (mentioning)
confidence: 99%