2019
DOI: 10.1007/s42044-019-00037-y

Discriminative and domain invariant subspace alignment for visual tasks

Cited by 14 publications
(10 citation statements)
References 21 publications
“…DISA 36 is introduced to find a latent feature subspace for both the source and the target domains. Given a source domain $\mathcal{D}_S = \{x_i^s, y_i^s\}_{i=1}^{n_s}$ and a target domain $\mathcal{D}_T = \{x_i^t\}_{i=1}^{n_t}$, where $x_i^s \in \mathcal{X}_s$ is a sample of the source domain, $y_i^s$ is the label of the source sample, and $n_s$ is the number of labeled source samples; similarly, $x_i^t \in \mathcal{X}_t$ is a sample of the target domain, and $n_t$ is the number of samples in the target domain.…”
Section: The Methods of DISA
confidence: 99%
“…where $S_\omega = \sum_{c=1}^{C} n_s^c \left(m_s^c - \bar{m}_s\right)\left(m_s^c - \bar{m}_s\right)^{\mathrm{T}}$; $S_b = \sum_{c=1}^{C} X_s^c H_s^c \left(X_s^c\right)^{\mathrm{T}}$; $S_t = X_t H_t X_t^{\mathrm{T}}$; in $\sum_{c=0}^{C} M_c$, the term for $c = 0$ is the MMD coefficient matrix for the marginal distribution, while the terms for $c = 1$ through $C$ are the MMD coefficient matrices in conditional-distribution mode; $P = [P_1 \; P_2]^{\mathrm{T}} \in \mathbb{R}^{2m \times k}$, where $k$ is the number of latent subspace dimensions; and $\beta$, $\alpha$, $\gamma$, $\lambda$ are the parameters of the DISA model. The detailed steps of the DISA method can be found in reference 36.…”
Section: The Methods of DISA
confidence: 99%
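The scatter matrices and MMD coefficient matrix quoted above can be sketched in NumPy. This is an illustrative sketch, not the authors' code: the function names, the features-in-rows/samples-in-columns layout, and the labels-in-{1..C} convention are all assumptions.

```python
import numpy as np

def scatter_matrices(Xs, ys, Xt):
    """Sketch of the S_omega, S_b, S_t terms quoted above.

    Xs: (m, n_s) source samples as columns; ys: class labels in {1..C};
    Xt: (m, n_t) target samples as columns. Naming follows the excerpt.
    """
    C = int(ys.max())
    m = Xs.shape[0]
    m_bar = Xs.mean(axis=1, keepdims=True)            # overall source mean
    S_omega = np.zeros((m, m))
    S_b = np.zeros((m, m))
    for c in range(1, C + 1):
        Xc = Xs[:, ys == c]                           # class-c source samples
        nc = Xc.shape[1]                              # n_s^c
        mc = Xc.mean(axis=1, keepdims=True)           # class mean m_s^c
        S_omega += nc * (mc - m_bar) @ (mc - m_bar).T # n_s^c (m_s^c - m̄_s)(...)^T
        Hc = np.eye(nc) - np.ones((nc, nc)) / nc      # centering matrix H_s^c
        S_b += Xc @ Hc @ Xc.T                         # X_s^c H_s^c (X_s^c)^T
    nt = Xt.shape[1]
    Ht = np.eye(nt) - np.ones((nt, nt)) / nt          # centering matrix H_t
    S_t = Xt @ Ht @ Xt.T                              # X_t H_t X_t^T
    return S_omega, S_b, S_t

def mmd_marginal(ns, nt):
    """M_0: MMD coefficient matrix for the marginal distributions (c = 0)."""
    e = np.concatenate([np.ones(ns) / ns, -np.ones(nt) / nt])[:, None]
    return e @ e.T                                    # (n_s + n_t) square matrix
```

The conditional-mode matrices $M_c$ for $c \geq 1$ would follow the same indicator-vector construction restricted to the samples of class $c$ in each domain.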
“…At the local step, CLGA uses both the class-discrimination information and the geometric structure of the data. Discriminative and domain invariant subspace alignment for visual tasks (DISA) [33] aims to embed the data of the various domains into relevant feature spaces. DISA globally matches the two domains by minimizing the distribution divergence between them.…”
Section: Related Work
confidence: 99%
“…We compare DAB with two baseline ML methods, i.e., NN and FLDA, and eight recent DA methods on the aforementioned datasets: joint distribution adaptation (JDA) [41], transfer joint matching (TJM) [42], discriminative transfer subspace learning via low-rank and sparse representation (LRSR) [43], JACRL [26], VDA [31], CLGA [32], DISA [33], and discriminative joint probability MMD for DA (JPDA) [44]. Since these methods are dimensionality-reduction approaches, we train a prediction function on the labeled training data (i.e., an NN or SVM classifier) and then apply it to the test data to estimate the labels of the target domain.…”
Section: Comparison Baselines
confidence: 99%
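The evaluation protocol described in this excerpt (train a simple classifier on the labeled, dimensionality-reduced source data, then predict target labels) can be sketched with a minimal nearest-neighbour classifier. The names `Zs`, `Zt`, and `nn_predict` are hypothetical; the excerpt does not specify an implementation.

```python
import numpy as np

def nn_predict(Zs, ys, Zt):
    """1-NN prediction: label each target sample with the label of its
    nearest source sample in the shared (projected) feature space.

    Zs: (n_s, k) projected source samples as rows; ys: source labels;
    Zt: (n_t, k) projected target samples as rows.
    """
    # squared Euclidean distance from every target row to every source row
    d2 = ((Zt[:, None, :] - Zs[None, :, :]) ** 2).sum(axis=-1)
    return ys[d2.argmin(axis=1)]      # label of the nearest source neighbour
```

In the comparison described above, `Zs` and `Zt` would be the outputs of whichever DA method (JDA, TJM, DISA, ...) is being evaluated, so that all methods share the same downstream classifier.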