Domain Adaptation in Computer Vision With Deep Learning 2020
DOI: 10.1007/978-3-030-45529-3_4

Deep Hashing Network for Unsupervised Domain Adaptation

Cited by 32 publications (6 citation statements) · References 31 publications
“…Traditional TL benchmarks have more balanced ratios between source and target domains. For example, in the Office dataset, the ratios range between 1.87 and 6.59 [42]; for the extended Office-Home dataset, the ratios are close to 1 [43]; and for the Photo-Art-Cartoon-Sketch (PACS) dataset, the ratios range from 1.14 to 2.35 [44]. In the presented Smart Maintenance Living Lab (SMLL) dataset, the ratio ranges from 8.6 to 21.6, meaning less data is available for each target domain.…”
Section: Transfer Learning in PdM Tasks
confidence: 97%
“…With relevant insights from [57], [58], we extend our hashing-based SC approach for domain adaptation, which includes unsupervised domain adaptation between the sender's and receiver's knowledge bases, semantic extraction at the sender, and unsupervised hashing at the receiver. We employ Multi-Kernel Maximum Mean Discrepancy (MKMMD) [58] to quantify the distribution difference between the sender and receiver datasets in a reproducing-kernel Hilbert space, which helps reduce knowledge-base disparity through nonlinear data alignment. In particular, with the extended framework, we seek to minimize (with slight abuse of notation to distinguish sender and receiver, extending the ideas from [58])…”
Section: B. With Domain Adaptive Hashing (DAH)
confidence: 99%
“…Combined with a new distance loss named maximum density divergence, Zhang et al. [9] propose an adversarial tight match model that improves domain adaptation through adversarial training and metric learning.…”
Section: Introduction
confidence: 99%