2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00531
Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation

Cited by 55 publications (41 citation statements) · References 30 publications
“…The formal CL concept was first proposed by Bengio et al. [7] with experiments on supervised visual and language learning [8][9][10][11][12]. Since then, many methods have been proposed to improve generalization or speed up convergence in the spirit of training on a sequence of easy-to-hard data.…”
Section: Related Work, 2.1 Curriculum Learning (CL)
confidence: 99%
“…In multi-target domain adaptation (MTDA), the goal is to learn from a single labeled source domain while performing well on multiple target domains simultaneously. To tackle MTDA in an image classification context, standard UDA approaches can be directly extended to multiple targets [54,32,155,137].…”
Section: Multi-target DA
confidence: 99%
“…This is a very difficult problem to solve, as one has to overcome both the category gap and the domain gap. Graph Neural Network (GNN)-based approaches [3][4][5][6] have gained traction in this field due to their ability to find relations in unstructured data. They also help represent the domains in a unified subspace.…”
Section: Introduction
confidence: 99%
“…Recent methods [4][5][6] involving GNNs have gained popularity owing to their transductive ability for semantic propagation of related samples among multiple domains. Roy et al. [3] propose using graph convolutional networks in the 1SmT setting, along with co-curriculum teaching to handle noisy pseudo-labels. In our novel setting, we propose using an attentional graph neural network [50], passing messages between the source and target domains to aggregate them in a unified space.…”
Section: Introduction
confidence: 99%
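As a toy illustration of the message-passing aggregation these GNN-based methods build on, here is a plain-Python sketch. The node features and graph are made up, and the aggregation is a plain mean; the cited methods instead use learned, attention-weighted aggregation over source and target samples.

```python
# One round of message passing on an undirected graph: each node's new
# feature is the average of its own feature and its neighbors' features.

def message_pass(features, edges):
    """Aggregate neighbor features for every node in one round.

    features: dict mapping node -> scalar feature
    edges:    list of (u, v) undirected edge pairs
    """
    neighbors = {n: [] for n in features}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    return {
        n: (features[n] + sum(features[m] for m in nbrs)) / (1 + len(nbrs))
        for n, nbrs in neighbors.items()
    }

# Toy usage on a 3-node path graph a - b - c.
updated = message_pass({"a": 1.0, "b": 3.0, "c": 5.0}, [("a", "b"), ("b", "c")])
# "b" averages its own feature with both neighbors: (3 + 1 + 5) / 3 == 3.0
```

Stacking such rounds is what lets related samples from different domains exchange information and settle into a shared feature space.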