Proceedings of the 13th EAI International Conference on Mobile Multimedia Communications, Mobimedia 2020, 27-28 August 2020, Cy 2020
DOI: 10.4108/eai.27-8-2020.2296559
Interaction representation-based subspace learning for domain adaptation

Cited by 11 publications (13 citation statements)
References 7 publications
“…Despite some trials of hyperparameter tuning, as shown in Table 1, their results were still comparable to or worse than no fine-tuning. This may be due to the fact that most DA approaches were designed for classification tasks, which may not be directly applicable to our regression task [3]. This further verifies the value of our work.…”
Section: Volume-sampled Images (mentioning)
confidence: 54%
“…Another disadvantage is that minimizing MMD on the instance representation carries the risk of changing the feature scale, while regression tasks are fragile to feature scaling. Thus, Representation Subspace Distance (RSD) (Chen et al., 2021b) closes the domain shift through orthogonal bases of the representation spaces, which are free from feature scaling.…”
Section: MK-MMD (mentioning)
confidence: 99%
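The statement above argues that a subspace-based measure sidesteps feature-scaling issues. The sketch below (Python/PyTorch) illustrates the general idea only: take orthogonal bases of source and target feature batches via SVD and compare the subspaces through principal angles. It is a simplified illustration, not the exact RSD loss of Chen et al. (2021b), which also penalizes base mismatch; the function name, epsilon, and batch shapes are assumptions for this example.

import torch

def subspace_distance(feat_s: torch.Tensor, feat_t: torch.Tensor) -> torch.Tensor:
    """Illustrative subspace distance between (batch, dim) feature batches."""
    # Orthogonal bases of the two representation spaces via SVD;
    # uniformly rescaling a feature matrix leaves its basis unchanged.
    u_s, _, _ = torch.linalg.svd(feat_s.t(), full_matrices=False)  # (dim, k)
    u_t, _, _ = torch.linalg.svd(feat_t.t(), full_matrices=False)  # (dim, k)
    # Singular values of U_s^T U_t are the cosines of the principal angles.
    cos_angles = torch.linalg.svdvals(u_s.t() @ u_t).clamp(max=1.0)
    # Sum of sines of the principal angles: near zero when the subspaces
    # coincide, larger as they diverge.
    return torch.sqrt(1.0 - cos_angles ** 2 + 1e-12).sum()

# Example usage with random stand-in features (hypothetical shapes).
fs, ft = torch.randn(32, 256), torch.randn(32, 256)
print(subspace_distance(fs, ft))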
“…1) Maximum Mean Discrepancy based methods: Maximum Mean Discrepancy (MMD) based methods [25]-[27] use the maximum mean discrepancy to measure and reduce the distance between extracted features. Tzeng et al. [2] introduce an adaptation layer with an additional domain confusion loss to learn domain-invariant representations.…”
Section: Related Work (mentioning)
confidence: 99%
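To make the quantity in the statement above concrete, here is a minimal sketch of the Maximum Mean Discrepancy with a Gaussian (RBF) kernel, the discrepancy that MMD-based methods minimize between source and target feature batches. The kernel bandwidth, function name, and batch shapes are illustrative assumptions and are not taken from the cited works [25]-[27] or from Tzeng et al. [2].

import torch

def rbf_mmd(feat_s: torch.Tensor, feat_t: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Biased estimate of MMD^2 between two (batch, dim) feature batches."""
    def gaussian_kernel(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Pairwise squared Euclidean distances mapped through an RBF kernel.
        d2 = torch.cdist(a, b, p=2) ** 2
        return torch.exp(-d2 / (2.0 * bandwidth ** 2))

    k_ss = gaussian_kernel(feat_s, feat_s).mean()
    k_tt = gaussian_kernel(feat_t, feat_t).mean()
    k_st = gaussian_kernel(feat_s, feat_t).mean()
    # MMD^2 = E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)]
    return k_ss + k_tt - 2.0 * k_st

# Example usage with random stand-in features (hypothetical shapes).
fs, ft = torch.randn(32, 256), torch.randn(32, 256)
print(rbf_mmd(fs, ft))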