2020
DOI: 10.1111/biom.13354
Regularized matrix data clustering and its application to image analysis

Abstract: We propose a novel regularized mixture model for clustering matrix-valued data. The proposed method assumes a separable covariance structure for each cluster and imposes a sparsity structure (e.g., low rankness, spatial sparsity) for the mean signal of each cluster. We formulate the problem as a finite mixture model of matrix-normal distributions with regularization terms, and then develop an EM-type of algorithm for efficient computation. In theory, we show that the proposed estimators are strongly consistent…
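The abstract describes an EM-type algorithm for a finite mixture of matrix-normal distributions with a penalized mean for each cluster. The Python sketch below is only an illustration of that general idea, not the authors' implementation: it assumes an entrywise soft-threshold on the cluster means as a simple surrogate for the paper's sparsity penalty and keeps the row and column covariances fixed during the update. All names (matrix_normal_logpdf, em_step, lam) are hypothetical.

import numpy as np

def matrix_normal_logpdf(X, M, U, V):
    # Log-density of MN(M, U, V); U is the row covariance, V the column covariance.
    r, c = X.shape
    resid = X - M
    quad = np.trace(np.linalg.inv(V) @ resid.T @ np.linalg.inv(U) @ resid)
    _, logdet_U = np.linalg.slogdet(U)
    _, logdet_V = np.linalg.slogdet(V)
    return -0.5 * (quad + r * c * np.log(2 * np.pi) + c * logdet_U + r * logdet_V)

def em_step(Xs, pis, Ms, Us, Vs, lam):
    # One E-step plus a simplified penalized M-step (mixing weights and means only).
    n, K = len(Xs), len(pis)
    log_resp = np.zeros((n, K))
    for i, X in enumerate(Xs):
        for k in range(K):
            log_resp[i, k] = np.log(pis[k]) + matrix_normal_logpdf(X, Ms[k], Us[k], Vs[k])
    log_resp -= log_resp.max(axis=1, keepdims=True)   # stabilize before exponentiating
    resp = np.exp(log_resp)
    resp /= resp.sum(axis=1, keepdims=True)           # cluster responsibilities

    new_pis = resp.mean(axis=0)
    new_Ms = []
    for k in range(K):
        w = resp[:, k]
        # Responsibility-weighted mean followed by soft-thresholding; this is the exact
        # l1-penalized update only when Us[k] and Vs[k] are identity matrices, and
        # otherwise only a rough approximation of a penalized M-step.
        W = np.tensordot(w, np.stack(Xs), axes=1) / w.sum()
        new_Ms.append(np.sign(W) * np.maximum(np.abs(W) - lam / w.sum(), 0.0))
    return new_pis, new_Ms, resp

In a full algorithm this step would be iterated, with the separable covariances Us and Vs also re-estimated from the responsibility-weighted residuals.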

Cited by 21 publications (40 citation statements)
References 34 publications
“…Ref. [9] proposed a penalized matrix-normal mixture model that imposes a penalty on the mean matrix. This method captures row-wise and column-wise correlation simultaneously and can recover the sparsity inherent in signals and images.…”
Section: Introduction (mentioning)
confidence: 99%
“…This method captures row-wise and column-wise correlation simultaneously and can recover the sparsity inherent in signals and images. However, the approach of [9] is weaker for high-dimension, low-sample-size data because it does not reduce the number of precision parameters to be estimated.…”
Section: Introduction (mentioning)
confidence: 99%
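The separable covariance structure these citation statements refer to is the matrix-normal (Kronecker) structure: if X ~ MN(M, U, V), then vec(X) ~ N(vec(M), V ⊗ U), so U models correlation across rows and V across columns. The short check below is my own illustration of this equivalence with made-up U and V; it is not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
r, c = 4, 3
U = np.eye(r) + 0.5 * np.ones((r, r))   # illustrative row covariance
V = np.diag([1.0, 2.0, 3.0])            # illustrative column covariance
M = np.zeros((r, c))

# Sample X = M + A Z B^T with U = A A^T, V = B B^T, and Z having i.i.d. N(0, 1) entries.
A, B = np.linalg.cholesky(U), np.linalg.cholesky(V)
draws = np.stack([
    (M + A @ rng.standard_normal((r, c)) @ B.T).flatten(order="F")   # column-major vec
    for _ in range(50000)
])

# The empirical covariance of vec(X) approaches np.kron(V, U); the gap shrinks as the
# number of draws grows.
print(np.abs(np.cov(draws, rowvar=False) - np.kron(V, U)).max())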