2015 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2015.480

A Supervised Low-Rank Method for Learning Invariant Subspaces

Abstract: Sparse representation and low-rank matrix decomposition approaches have been successfully applied to several computer vision problems. They build a generative representation of the data, which often requires complex training as well as testing to be robust against data variations induced by nuisance factors. We introduce the invariant components, a discriminative representation invariant to nuisance factors, which spans subspaces orthogonal to the space where nuisance factors are defined. This allows deve…
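The abstract's central construction can be illustrated in a few lines. The following is a minimal numpy sketch under our own assumptions (a known orthonormal nuisance basis V and synthetic toy data; it is not the paper's actual optimization): the invariant components are obtained by projecting the data onto the orthogonal complement of the nuisance subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 50, 200, 5              # feature dim, samples, nuisance-subspace dim

# Hypothetical orthonormal basis V for a known nuisance subspace.
V, _ = np.linalg.qr(rng.standard_normal((d, k)))

# Toy data: useful signal plus strong variation along the nuisance directions.
X = rng.standard_normal((d, n)) + 3.0 * (V @ rng.standard_normal((k, n)))

# Invariant components: project onto the orthogonal complement of span(V).
P = np.eye(d) - V @ V.T           # projector satisfying P @ V == 0
X_inv = P @ X

# The representation no longer responds to any nuisance direction.
print(np.abs(V.T @ X_inv).max())  # ~1e-15
```

Because P annihilates every direction in span(V), variation along the nuisance subspace is removed exactly, which is the orthogonality property the abstract describes.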

Cited by 8 publications (6 citation statements) · References 37 publications
“…As far as we are concerned, this strategy has only been performed in centralized settings. Nonetheless, in a decentralized setting each participant could … [Figure 1: Classification of the different approaches that are able to solve the problem of spatial heterogeneity in the input spaces — domain transformation (domains with particular features [62,63,64]; domain factorization [65,66,67,68]), personalization [36,37,38,39,45,56], domain adaptation [69,70,71], dissimilarity methods [72,73,74,75], sample reweighting [76,77], generative adversarial networks [78,79,80,81,82]]…”
Section: Changes In the Input Space Throughout Clients
confidence: 99%
“…For instance, [62,63,64] consider that each domain may have its own set of features to characterize the samples, causing incompatibilities across domains, and they develop methods to extract a common feature representation. A different approach, taken in [65,66,67,68], constructs a factorization of the feature space with certain properties. [65,66] split the feature space into two orthogonal subspaces: one contains the domain variations, while the other keeps the common parts, and the two are used separately for learning.…”
Section: Changes In the Input Space Throughout Clients
confidence: 99%
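To make the factorization idea concrete, here is a simplified numpy sketch (our own construction on toy data, not the algorithms of [65,66]): the domain-variation subspace is estimated from centered per-domain means, and each feature vector is then split into a variant part and its orthogonal common part.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 30

# Toy features from three domains: shared structure plus a per-domain offset.
offsets = rng.standard_normal((3, d)) * 2.0
domains = [rng.standard_normal((100, d)) + offsets[i] for i in range(3)]

# Estimate the domain-variation subspace from the centered domain means.
means = np.stack([D.mean(axis=0) for D in domains])
means -= means.mean(axis=0)
U, s, _ = np.linalg.svd(means.T, full_matrices=False)
B = U[:, :2]                       # variation basis (rank <= #domains - 1)

def factorize(x):
    """Split x into a domain-variant part and its orthogonal common part."""
    variant = B @ (B.T @ x)        # component inside span(B)
    common = x - variant           # component orthogonal to span(B)
    return variant, common

variant, common = factorize(domains[0][0])
print(np.allclose(variant + common, domains[0][0]))  # True: exact split
```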
“…[60,17,18,59,40]) used a learned similarity measure together with a nearest-neighbors classifier. [48,46,10,44,9] propose methods in which features are learned jointly with the metric. Chechik et al. [9] use a similarity measure for large-scale image search.…”
Section: Related Work
confidence: 99%
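A minimal sketch of the "learned similarity plus nearest neighbors" recipe follows. It fits a closed-form Mahalanobis metric (inverse within-class covariance) from labels on toy data; the data, names, and metric choice are our own assumptions, not the methods of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-class data with correlated, anisotropic noise.
A = rng.standard_normal((2, 2))
X0 = rng.standard_normal((100, 2)) @ A.T
X1 = rng.standard_normal((100, 2)) @ A.T + np.array([3.0, 0.0])
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# "Learn" the similarity from labels: inverse of the within-class covariance.
Sw = sum(np.cov(X[y == c], rowvar=False) for c in (0, 1)) / 2
VI = np.linalg.inv(Sw)

def knn_predict(query, k=5):
    """Classify by majority vote over the k nearest training points."""
    diff = X - query
    dist = np.einsum('nd,de,ne->n', diff, VI, diff)  # squared Mahalanobis
    return np.bincount(y[np.argsort(dist)[:k]]).argmax()

print(knn_predict(np.array([2.8, 0.1])))  # expected: class 1
```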
“…Siyahjani et al. [35] introduce invariant components into sparse representation and low-rank matrix decomposition approaches and successfully apply them to computer vision problems. They add an orthogonality constraint to ensure that the invariant and variant components are linearly independent.…”
Section: Low-rank Matrix Decomposition
confidence: 99%
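The orthogonality constraint described here is easy to state in code. Below is a small numpy sketch (our own illustration, not the solver of [35]) that measures the violation ||W^T U||_F between an invariant basis W and a variant basis U, then enforces the constraint by projecting U onto the orthogonal complement of span(W).

```python
import numpy as np

rng = np.random.default_rng(3)
d = 40

# Two learned bases: W (invariant) and U (variant), initially not orthogonal.
W, _ = np.linalg.qr(rng.standard_normal((d, 5)))
U = rng.standard_normal((d, 5))

# Constraint violation, usable as a penalty term: ||W^T U||_F.
print(np.linalg.norm(W.T @ U))          # > 0 before enforcement

# Enforce the constraint: remove the span(W) component of U.
U = U - W @ (W.T @ U)
U, _ = np.linalg.qr(U)                  # re-orthonormalize the variant basis
print(np.linalg.norm(W.T @ U))          # ~1e-15 after projection
```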