2012 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2012.6247923
Generalized Multiview Analysis: A discriminative latent space

Cited by 599 publications (395 citation statements)
References 19 publications
“…CCA has been used frequently in unsupervised data analysis (Sargin et al., 2006; Chaudhuri et al., 2009; Kumar and Daumé, 2011; Sharma et al., 2012). Deep Canonical Correlation Analysis (DCCA) aims to learn highly correlated deep architectures and can be viewed as a non-linear extension of CCA.…”
Section: Mue Feature Learning Methods
confidence: 99%
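The statement above treats CCA as the unsupervised baseline that DCCA and GMA build on. The following minimal sketch, assuming scikit-learn is available, shows classical two-view CCA on synthetic data; the variable names and data are illustrative only, not tied to any experiment in the cited papers.

```python
# Minimal CCA sketch on two synthetic views sharing a latent signal.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))            # shared latent signal
view_a = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))
view_b = latent @ rng.normal(size=(2, 15)) + 0.1 * rng.normal(size=(200, 15))

cca = CCA(n_components=2)
za, zb = cca.fit_transform(view_a, view_b)    # projections of each view

# Paired components should be highly correlated when the views share structure.
for k in range(2):
    r = np.corrcoef(za[:, k], zb[:, k])[0, 1]
    print(f"component {k}: canonical correlation = {r:.2f}")
```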
“…In order to validate the contributions of each separate component, three approaches, correlation matching (CM), semantic matching (SM), and semantic correlation matching, are proposed for the correlation modeling, the abstraction method, and the joint working mode of both, respectively. Sharma et al. [36] proposed Generalized Multiview Analysis (GMA) to extract features from different views. GMA solves a joint, relaxed quadratically constrained quadratic program (QCQP) over the different feature spaces to obtain a single linear/non-linear subspace; it therefore admits an efficient eigenvalue-based solution and can be extended to the cross-modality scenario.…”
Section: Related Work
confidence: 99%
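The eigenvalue-based solution mentioned in the statement above reduces, for this family of multiview objectives, to a generalized eigenvalue problem A w = λ B w. The sketch below is a hedged illustration of that reduction with placeholder coupling and scatter matrices, not the exact GMA objective from the paper.

```python
# Illustrative generalized eigenvalue solve for a coupled two-view objective.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
d1, d2, n = 8, 6, 100
X1 = rng.normal(size=(d1, n))                 # view-1 features (columns = samples)
X2 = rng.normal(size=(d2, n))                 # view-2 features

# Cross-view coupling in A, within-view scatter in B (placeholder choices).
A = np.block([[np.zeros((d1, d1)), X1 @ X2.T],
              [X2 @ X1.T,          np.zeros((d2, d2))]])
B = np.block([[X1 @ X1.T,          np.zeros((d1, d2))],
              [np.zeros((d2, d1)), X2 @ X2.T]])
B += 1e-3 * np.eye(d1 + d2)                   # small ridge keeps B positive-definite

# eigh returns eigenvalues in ascending order; the last columns are the
# leading directions of the stacked projection [w1; w2].
vals, vecs = eigh(A, B)
w = vecs[:, -2:]
w1, w2 = w[:d1], w[d1:]
print("projection shapes:", w1.shape, w2.shape)
```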
“…Similarly, we apply a 1-NN classifier to the subspaces obtained by LDA [25], supervised LPP [26], D-GPLVM [20] and GPLRF [27]. We also compared VC-GPM to several state-of-the-art methods for multi-view learning, namely Multi-view Discriminant Analysis (mvDA) [28] and two Generalized Multiview Analysis (GMA) methods [29], GM Linear Discriminant Analysis (GMLDA) and GM Locality Preserving Projections (GMLPP), which extend LDA and LPP [30] to multiple views. Lastly, we also include the results obtained by DS-GPLVM [15], with the GP kernel parameters and the discriminative prior as in our VC-GPM.…”
Section: Datasets and Experimental Procedures
confidence: 99%
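The evaluation protocol described in the statement above, learning a discriminative subspace and then classifying with 1-NN inside it, can be sketched as follows; plain LDA stands in for GMLDA/GMLPP here, and the dataset and split are synthetic placeholders rather than the benchmarks used in the cited work.

```python
# Learn a discriminative subspace, project both splits, classify with 1-NN.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Subspace of dimension n_classes - 1, as is standard for LDA.
lda = LinearDiscriminantAnalysis(n_components=3).fit(X_tr, y_tr)
Z_tr, Z_te = lda.transform(X_tr), lda.transform(X_te)

knn = KNeighborsClassifier(n_neighbors=1).fit(Z_tr, y_tr)
print(f"1-NN accuracy in the LDA subspace: {knn.score(Z_te, y_te):.3f}")
```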