Proceedings of the 28th ACM International Conference on Multimedia 2020
DOI: 10.1145/3394171.3413516

Pairwise Similarity Regularization for Adversarial Domain Adaptation

Cited by 10 publications (6 citation statements)
References 31 publications
“…Such models are most commonly used for image recognition and classification purposes. As such, they usually contain some FC layers at the very end of the model (Wang et al., 2020) and are often referred to as encoders. But we do not want our model to produce just one value or a vector of values for a given input.…”
Section: Choice of Model Architecture
confidence: 99%
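The excerpt above distinguishes classification backbones, which end in FC layers and emit a single prediction vector, from encoders that retain spatial feature maps. A minimal sketch of that distinction, assuming a torchvision ResNet-18 backbone (an illustrative choice, not taken from the cited papers):

# Hedged sketch: strip the FC head from a classification backbone so it acts
# as an encoder producing spatial feature maps instead of a single vector.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)                   # conv stages + avgpool + fc classifier head
encoder = nn.Sequential(*list(backbone.children())[:-2])   # drop avgpool and fc, keep the conv stages

x = torch.randn(4, 3, 224, 224)                            # dummy image batch
features = encoder(x)                                      # -> (4, 512, 7, 7): a spatial feature map
print(features.shape)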
“…Wang et al. [9] introduce a pairwise similarity regularisation method based on adversarial learning, which exploits the clustering structure of the target domain and minimises the difference between the clustered partitions and the pseudo‐predicted pairwise similarities. Wang et al.…”
Section: Related Work
confidence: 99%
“…Later, Long et al [8] propose a principled framework that conditions the adversarial adaptation model on the discriminative information conveyed in the classifier predictions. Wang et al [9] introduce a pairwise similarity regularisation method based on adversarial learning, which exploits the clustering structure of the target domain and minimises the difference between the clustered partitions and the pseudo-predicted pairwise similarities. Wang et al [4] propose interchangeable batch normalisation, a novel module that effectively fuses different channels for UDA.…”
Section: Vanilla Unsupervised Domain Adaptation
confidence: 99%
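As a rough illustration of the regularisation described in these excerpts (a sketch under my own assumptions, not the authors' released code), one can penalise disagreement between a clustering-based pairwise partition of a target batch and the pairwise similarities implied by the classifier's pseudo-predictions:

# Hedged sketch of the pairwise-similarity idea summarised above; the exact
# loss in Wang et al. [9] may differ. A clustering of target features gives
# binary "same pair" targets, and the classifier's soft predictions give the
# pseudo-predicted pairwise similarities.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def pairwise_similarity_loss(features, logits, num_classes):
    # Clustered partition of the target batch (assumption: plain K-means on features).
    clusters = KMeans(n_clusters=num_classes, n_init=10).fit_predict(
        features.detach().cpu().numpy())
    same_cluster = torch.tensor(
        (clusters[:, None] == clusters[None, :]).astype("float32"),
        device=logits.device)

    # Pseudo-predicted pairwise similarity: inner product of softmax predictions.
    probs = F.softmax(logits, dim=1)
    pred_similarity = probs @ probs.t()            # each pair's similarity lies in (0, 1]

    # Regulariser: make predicted pair similarities agree with the clustered partition.
    return F.binary_cross_entropy(
        pred_similarity.clamp(1e-6, 1 - 1e-6), same_cluster)

# Usage (hypothetical): add lambda * pairwise_similarity_loss(f_t, logits_t, C)
# to the adversarial adaptation objective for a batch of unlabeled target samples.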
“…To overcome the above problem, the research community has made great efforts to address domain shift. In particular, unsupervised domain adaptation (UDA) has received considerable attention [8,26,44] since it requires no label information in the target domain. Recent work [10,47] based on deep learning has achieved remarkable results.…”
Section: Introduction
confidence: 99%