2019
DOI: 10.1111/biom.13108

Structural Learning and Integrative Decomposition of Multi-View Data

Abstract: The increased availability of multi-view data (data on the same samples from multiple sources) has led to strong interest in models based on low-rank matrix factorizations. These models represent each data view via shared and individual components, and have been successfully applied for exploratory dimension reduction, association analysis between the views, and consensus clustering. Despite these advances, there remain challenges in modeling partially-shared components and identifying the number of components…
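
To make the shared-and-individual structure described in the abstract concrete, the sketch below simulates two views that share a common score matrix, adds view-specific components and noise, and then takes a truncated SVD of the concatenated views as a crude look at the shared directions. All names, dimensions, and ranks are illustrative assumptions; this is not the paper's SLIDE estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p1, p2 = 100, 50, 40    # samples, features in view 1 and view 2
r_shared, r_ind = 2, 2     # ranks of shared and individual components (assumed known here)

# Shared scores common to both views; individual scores specific to each view
U_shared = rng.normal(size=(n, r_shared))
U1_ind   = rng.normal(size=(n, r_ind))
U2_ind   = rng.normal(size=(n, r_ind))

# View-specific loadings
V1_shared = rng.normal(size=(p1, r_shared))
V2_shared = rng.normal(size=(p2, r_shared))
V1_ind    = rng.normal(size=(p1, r_ind))
V2_ind    = rng.normal(size=(p2, r_ind))

# Each view = shared signal + individual signal + noise
X1 = U_shared @ V1_shared.T + U1_ind @ V1_ind.T + 0.1 * rng.normal(size=(n, p1))
X2 = U_shared @ V2_shared.T + U2_ind @ V2_ind.T + 0.1 * rng.normal(size=(n, p2))

# A crude look at shared directions: truncated SVD of the column-concatenated
# views. The paper's method estimates this structure more carefully, including
# which components are shared, partially shared, or individual.
X = np.hstack([X1, X2])
U, s, Vt = np.linalg.svd(X, full_matrices=False)
joint_scores = U[:, :r_shared] * s[:r_shared]   # candidate shared scores (not SLIDE's estimate)
print(joint_scores.shape)                        # (100, 2)
```

In practice the number of shared, partially shared, and individual components is unknown, which is exactly the identification problem the abstract refers to.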

Cited by 67 publications (72 citation statements)
References 27 publications

“…In the context of vertical integration, the joint and individual scores $\mathbf{V}$ and $\mathbf{V}_i$ have been applied to risk prediction (Kaplan and Lock) and clustering (Hellton and Thoresen) for high-dimensional data. Several related techniques, such as AJIVE (Feng et al.) and SLIDE (Gaynanova and Li), have been proposed (Zhou et al.), as well as extensions that allow the adjustment of covariates (Li and Jung) or accommodate heterogeneity in the distributional assumptions for different sources (Li and Gaynanova; Zhu et al.).…”
Section: Introduction
confidence: 99%
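
The excerpt above notes that joint and individual scores have been used for risk prediction and clustering. A toy illustration of that downstream use, assuming a hypothetical joint_scores matrix already produced by a JIVE/AJIVE/SLIDE-type decomposition, with synthetic data and scikit-learn used purely for convenience:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical joint score matrix (samples x joint components) from a
# multi-view decomposition, plus a synthetic binary outcome for prediction.
joint_scores = rng.normal(size=(100, 3))
outcome = (joint_scores[:, 0] + 0.5 * rng.normal(size=100) > 0).astype(int)

# Clustering on the low-dimensional joint scores
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(joint_scores)

# Risk prediction from the same scores
model = LogisticRegression().fit(joint_scores, outcome)
print(clusters[:10], model.score(joint_scores, outcome))
```
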
“…This is a limitation of most data integration methods, and we expect partially shared components to result in even better prediction models. A way to investigate partially shared patterns is provided in the SLIDE method by [37], and is a potential starting point for further work in this direction.…”
Section: Discussion
confidence: 99%
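
The partially shared patterns mentioned in this excerpt can be pictured with a binary view-by-component structure matrix: a component active in every view is fully shared, one active in a strict subset is partially shared, and one active in a single view is individual. The matrix below is a made-up example of that bookkeeping, not output from SLIDE.

```python
import numpy as np

# Illustrative binary structure matrix: rows = views, columns = components.
# 1 means the component is active (has nonzero loadings) in that view.
S = np.array([
    [1, 1, 1, 0, 0],   # view 1
    [1, 1, 0, 1, 0],   # view 2
    [1, 0, 1, 0, 1],   # view 3
])

n_views = S.shape[0]
counts = S.sum(axis=0)
shared     = np.where(counts == n_views)[0]
partial    = np.where((counts > 1) & (counts < n_views))[0]
individual = np.where(counts == 1)[0]
print("fully shared components:", shared)       # [0]
print("partially shared components:", partial)  # [1 2]
print("individual components:", individual)     # [3 4]
```
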
“…This is critically important for generating meaningful, interpretable results using existing statistical models (45,46). Other methods, such as structural learning and integrative decomposition of multi-view data (47), common orthogonal basis extraction (48), and group factor analysis (49), may also be used. Our results suggest that an attractive feature of JIVE is the performance robustness, consistent with our prior study of brain age prediction (20).…”
Section: Structural Covariation May Reflect Synchronized Development
confidence: 99%