2020
DOI: 10.1016/j.csda.2019.106822
On parsimonious models for modeling matrix data

Cited by 42 publications (24 citation statements); references 27 publications.
“…It is reasonable to assume that similar overparameterization concerns can also affect the MN-CWM when the dimensions of the matrices are quite high. To further increase the parsimony of our model, constrained parameterizations of the covariance matrices can be employed, by following the approaches of Sarkar et al. (2020), Subedi et al. (2013), Punzo and Ingrassia (2015), and Mazza et al. (2018). Furthermore, to accommodate skewness or mild outliers in the data, skewed or heavy-tailed matrix-variate distributions could also be considered for the mixing components of the model (e.g., Melnykov and Zhu 2018; Gallaugher and McNicholas 2018; Tomarchio et al. 2020).…”
Section: Discussion
confidence: 99%
“…Specifically, constrained parameterizations of the covariance matrices can be employed, both for the distribution of the responses and the distribution of the covariates. This can be done by following two different routes: (i) the eigen-decomposition approach in the fashion of Sarkar et al. (2020), or (ii) the bilinear factor analyzers method in accordance with Gallaugher & McNicholas (2019b). Both proposals can drastically reduce the number of estimated parameters, allowing for more parsimonious models.…”
Section: Discussion
confidence: 99%
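The eigen-decomposition route mentioned above factors each component covariance as a volume, an orientation, and a shape. Below is a minimal illustrative sketch of that decomposition, Σ = λ D A Dᵀ, with hypothetical values; the specific constraints and naming used by Sarkar et al. (2020) may differ.

```python
import numpy as np

# Sketch of the eigen decomposition Sigma = lam * D * A * D^T underlying
# parsimonious covariance families; all values here are illustrative.
lam = 2.0                       # volume: overall scale of the covariance
theta = np.pi / 6
D = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # orientation: rotation matrix
A = np.diag([4.0, 0.25])        # shape: diagonal with det(A) = 1 by convention

Sigma = lam * D @ A @ D.T

# Constraining lam, D, or A to be shared across components (or fixing
# D = I or A = I) yields the alternative parameterizations.
print(np.round(Sigma, 3))
```

Holding different subsets of (λ, D, A) equal across mixture components is what generates the family of parsimonious models discussed in the citing passages.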
“…In all these cases we have p variables measured on r different occasions on N observations, so that the data can be arranged in a three-way array structure having the following three dimensions: variables (rows), occasions (columns) and observations (layers). Examples of contributions to the matrix-variate mixture models literature are Viroli (2011a,b), Gallaugher & McNicholas (2018), Melnykov & Zhu (2018), Sarkar et al. (2020), Tomarchio et al. (2020), and Tomarchio, Gallaugher, Punzo & McNicholas (2021).…”
Section: Introduction
confidence: 99%
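The three-way structure described above (variables × occasions × observations) can be sketched concretely with an array; the sizes below are hypothetical, chosen only for illustration.

```python
import numpy as np

# Illustrative sketch: matrix-variate data as a three-way array with
# dimensions variables (rows) x occasions (columns) x observations (layers).
p, r, N = 4, 3, 100   # hypothetical: 4 variables, 3 occasions, 100 observations

rng = np.random.default_rng(0)
data = rng.normal(size=(p, r, N))   # data[:, :, i] is the p x r matrix for observation i

X_i = data[:, :, 0]                 # a single observation: one p x r matrix
print(X_i.shape)
```

Each layer `data[:, :, i]` is one p × r matrix observation, which is exactly the object a matrix-variate mixture component models.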
“…By varying covariance matrix volumes, orientations, and shapes, 14 alternative models can be obtained. An extension of this idea to the matrix-variate framework leads to 98 parsimonious models produced as combinations of 14 parameterizations of Σ and seven representations of Ψ (Sarkar et al., 2020). The number of parameterizations of Ψ is lower than that of Σ due to an introduced constraint that makes the model identifiable.…”
Section: Methods
confidence: 99%
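A back-of-the-envelope sketch of the counts quoted above: 14 × 7 combinations give 98 models, and the identifiability constraint removes one free parameter from the Ψ side. The parameter-count function below is an illustrative assumption (one scale removed from Ψ), not necessarily the exact constraint used by Sarkar et al. (2020).

```python
# Back-of-the-envelope parameter count for one unconstrained matrix-normal
# mixture component: Sigma is p x p (row covariance), Psi is r x r (column
# covariance), and one parameter is removed for identifiability (assumption).

def covariance_params(p: int, r: int) -> int:
    """Free covariance parameters of a fully unconstrained component."""
    return p * (p + 1) // 2 + r * (r + 1) // 2 - 1

# 14 parameterizations of Sigma x 7 representations of Psi = 98 models.
n_models = 14 * 7
print(n_models, covariance_params(p=5, r=4))  # 98, 24
```

The constrained parameterizations reduce this count further, which is the source of the parsimony the citing passage highlights.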
“…Gaussian mixture models for matrix or three-way data have been proposed and extensively studied by Viroli. Recently, mixtures capable of modelling non-normal matrix data groups have been proposed by Dogru et al., Gallaugher and McNicholas, Melnykov and Zhu, and Sarkar et al.…”
Section: Introduction
confidence: 99%