2020
DOI: 10.1007/s13042-020-01200-9
A transductive transfer learning approach for image classification

Cited by 13 publications (3 citation statements)
References 22 publications

“…To demonstrate the efficiency of our VTL, the results of our experiments are compared with several unsupervised domain adaptation methods including EMFS (2018) [40], EasyTL (2019) [41], STJML (2020) [42], GEF (2019) [43], DWDA (2021) [44], CDMA (2020) [45], ALML (2022) [46], TTLC (2021) [33], SGA-MDAP (2020) [47], NSO (2020) [48], FSUTL (2020) [49], PLC (2021) [50], GSI (2021) [51] and ICDAV (2022) [52]. In the experiments, VTL begins with learning a domain invariant and class discriminative latent feature space according to Equation (18).…”
Section: Results (mentioning)
confidence: 99%

“…Joint geometrical and statistical alignment, proposed in [31], pseudo-labels target samples with a source-domain classifier and aims to reduce the conditional distribution divergence by iteratively updating the pseudo-labels. The works most related to ours are domain invariant and class discriminative learning (DICD) [32] and transductive transfer learning for image classification (TTLC) [33], which map both the source and target domains into a shared feature space with the least marginal and conditional distribution differences. Both TTLC and DICD use pseudo-labeling of target samples without a label-correction paradigm. Interclass maximization is applied between different classes in both the source and target domains. Also, VTL maximizes the inter-class distances between different classes across the source and target domains.…”
Section: Methods With Pseudo Labeling (mentioning)
confidence: 99%
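
The excerpt above describes an alternating scheme: label the target with a source-trained classifier, then re-align the domains using those pseudo-labels, and repeat. Below is a minimal Python sketch of that loop; the 1-NN classifier, the fixed iteration count, and the `align_fn` hook (a hypothetical stand-in for a DICD/TTLC-style projection step) are illustrative assumptions, not the cited papers' exact procedures.

```python
from sklearn.neighbors import KNeighborsClassifier

def identity_align(Xs, ys, Xt, yt_pseudo):
    # Hypothetical stand-in for a learned shared projection; returns the
    # features unchanged so the loop below runs end to end.
    return Xs, Xt

def iterative_pseudo_labels(Xs, ys, Xt, align_fn=identity_align, n_iters=10):
    """Alternate between (a) pseudo-labeling the target with a classifier
    trained on source data and (b) re-aligning both domains with the
    current pseudo-labels."""
    Zs, Zt = Xs, Xt
    yt_pseudo = None
    for _ in range(n_iters):
        clf = KNeighborsClassifier(n_neighbors=1).fit(Zs, ys)
        yt_pseudo = clf.predict(Zt)  # source classifier labels the target
        # Re-estimate the shared space using the refreshed pseudo-labels.
        Zs, Zt = align_fn(Xs, ys, Xt, yt_pseudo)
    return yt_pseudo
```

Because no label-correction step filters the pseudo-labels in this sketch, early mistakes can propagate across iterations.
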
“…Joint marginal and conditional distribution adaptation is a useful method in transfer learning, which measures the distribution shift between domains with a metric such as maximum mean discrepancy (MMD) [42]. In many joint distribution adaptation based approaches, the marginal and conditional distributions are treated equally, which may not be optimal [9, 34, 43]. In other words, if two domains are very dissimilar, the marginal distributions show the larger discrepancy and need more attention during alignment; if two domains are similar (i.e., the marginal distributions are close), the conditional distributions need more attention and should be given more weight.…”
Section: Introduction (mentioning)
confidence: 99%
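
The weighting idea in the excerpt can be made concrete with a small discrepancy function. Below is a minimal NumPy sketch, assuming a linear (mean-embedding) form of MMD and a balance factor `mu`; the names `mu`, `Xs`, `Xt`, `yt_pseudo` and the weighting scheme are illustrative assumptions, not the formulation of any cited paper.

```python
import numpy as np

def marginal_mmd(Xs, Xt):
    # Squared MMD between domain means under a linear (mean-embedding) kernel.
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def conditional_mmd(Xs, ys, Xt, yt_pseudo):
    # Sum of per-class squared mean discrepancies; target labels are
    # pseudo-labels, since true target labels are unavailable.
    total = 0.0
    for c in np.unique(ys):
        Xt_c = Xt[yt_pseudo == c]
        if len(Xt_c) == 0:  # class absent among pseudo-labels: skip it
            continue
        Xs_c = Xs[ys == c]
        total += float(np.sum((Xs_c.mean(axis=0) - Xt_c.mean(axis=0)) ** 2))
    return total

def weighted_discrepancy(Xs, ys, Xt, yt_pseudo, mu=0.5):
    # mu near 1 stresses the marginal gap (dissimilar domains);
    # mu near 0 stresses the conditional gap (similar domains).
    return mu * marginal_mmd(Xs, Xt) + (1 - mu) * conditional_mmd(Xs, ys, Xt, yt_pseudo)
```

Setting `mu` adaptively rather than to a fixed 0.5 is the "unequal treatment" of the two distributions that the excerpt argues for.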