2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00672
Uncertainty-guided Model Generalization to Unseen Domains

Cited by 56 publications (15 citation statements)
References 22 publications
“…[22], [23], [31], [34], [72], [73], [95], [96], [164], [165], [166]
Data Augmentation (§ 3.3):
- Hand-Engineered Transformations [32], [33], [65], [86], [167], [168], [169], [170]
- Gradient-Based Augmentation [24], [40], [45], [171]
- Model-Based Augmentation [19], [25], [26], [137], [172], [173], [174], [175], [176]
- Feature-Based Augmentation [27], [86], [177], [178]
Ensemble Learning (§ 3.4):
- Exemplar-SVMs [47], [50], [179]
- Domain-Specific Neural Networks [104], [138], [180], [181], [182]
- Domain-Specific Batch Normalization [183], [184], [185], [186]
- Weight Averaging [187]
Network Architecture Design (§ 3.5):
- Exploiting Instance Normalization [39], [57], [58], [59], [188]
- Problem-Specific Modules ...…”
Section: Action Recognition (mentioning)
confidence: 99%
“…Combining feature-level augmentation with image-level augmentation has been studied in [86] for zero-shot DG, where Mixup [214] is applied to mix instances from different domains in both the image and the feature space. In [178], a neural network is learned via a worst-case loss [171] to generate feature-level perturbations, and a learnable Mixup is proposed to mix the perturbed and original instances in both feature and label space.…”
Section: Feature-based Augmentation (mentioning)
confidence: 99%
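The statement above describes Mixup applied at both the image level and the feature level. The following is a minimal PyTorch sketch of that idea; the tiny encoder/classifier, the tensor shapes, and the Beta parameter alpha are illustrative assumptions, not the architecture of the cited papers.

```python
# Minimal sketch of domain Mixup at image level and feature level,
# loosely following the statement above. The encoder/classifier and
# the shapes below are illustrative assumptions, not the cited method.
import torch
import torch.nn as nn

def mixup(a, b, alpha=0.2):
    """Convex combination of two batches; returns the mix and the coefficient."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * a + (1 - lam) * b, lam

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
classifier = nn.Linear(128, 10)
ce = nn.CrossEntropyLoss()

# Two toy batches standing in for two source domains.
x1, y1 = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
x2, y2 = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))

# Image-level Mixup: interpolate raw pixels across domains.
x_mix, lam_img = mixup(x1, x2)
logits_img = classifier(encoder(x_mix))
loss_img = lam_img * ce(logits_img, y1) + (1 - lam_img) * ce(logits_img, y2)

# Feature-level Mixup: interpolate the encoded representations instead.
f_mix, lam_feat = mixup(encoder(x1), encoder(x2))
logits_feat = classifier(f_mix)
loss_feat = lam_feat * ce(logits_feat, y1) + (1 - lam_feat) * ce(logits_feat, y2)

(loss_img + loss_feat).backward()
```

Note that the label-space mixing mirrors the input mixing: each loss is the same convex combination of the two domains' cross-entropy terms.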
“…disentangle domain-specific and domain-invariant information [2,3,16,28], or align feature distributions of different domains while preserving their semantics [1,17,22,43]. Typical approaches for single DG simulate the presence of new domains with data augmentation, either through adversarial strategies [9,18,25,26,33,41] or direct input transformation [32]. For instance, [33] performs adversarial data augmentation under a worst-case formulation, assuming that samples of unseen domains are close to the training distribution.…”
Section: Related Work (mentioning)
confidence: 99%
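The worst-case formulation mentioned for [33] trains on fictitious samples obtained by maximizing the task loss while staying near the source distribution. Below is a minimal sketch of that inner maximization; the model, the number of ascent steps, the step size, and the distance-penalty weight gamma are all illustrative assumptions rather than the cited paper's settings.

```python
# Minimal sketch of adversarial data augmentation under a worst-case
# formulation: perturb inputs by gradient ascent on the task loss,
# penalized by distance to the original samples so the fictitious
# samples stay close to the source distribution. Hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(),
                      nn.Linear(64, 10))
ce = nn.CrossEntropyLoss()

def worst_case_augment(x, y, steps=5, lr=1.0, gamma=1.0):
    """Gradient ascent on ce(model(x_adv), y) - gamma * ||x_adv - x||^2."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        obj = ce(model(x_adv), y) - gamma * ((x_adv - x) ** 2).mean()
        grad, = torch.autograd.grad(obj, x_adv)
        with torch.no_grad():
            x_adv += lr * grad
    return x_adv.detach()

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_aug = worst_case_augment(x, y)  # fictitious "unseen domain" samples
loss = ce(model(torch.cat([x, x_aug])), torch.cat([y, y]))
loss.backward()
```

The penalty term is what encodes the assumption quoted above: unseen domains are treated as lying within a bounded distance of the training distribution.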
“…For example, DDAIG, developed by Zhou et al. [48], learns a neural network to transform images' appearance such that a domain classifier cannot identify their source domain labels. The last group transitions from pixel- to feature-level augmentation by, e.g., mixing feature statistics [49] or learning feature perturbation networks [28]. Most existing DG methods cannot handle unlabeled data, except those based on self-supervised learning [6,39].…”
Section: Ablation Study and Analysis (mentioning)
confidence: 99%
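The "mixing feature statistics" idea attributed to [49] interpolates channel-wise means and standard deviations between instances, changing style while preserving content (the mechanism popularized as MixStyle). The sketch below is a minimal rendition under assumed shapes and an assumed alpha, not the cited implementation.

```python
# Minimal sketch of mixing feature statistics across instances: each
# feature map is normalized by its own channel-wise mean/std, then
# re-scaled with statistics interpolated with a shuffled partner.
# Shapes and alpha are illustrative assumptions.
import torch

def mix_feature_statistics(f, alpha=0.1, eps=1e-6):
    """f: intermediate feature maps of shape (batch, channels, H, W)."""
    mu = f.mean(dim=(2, 3), keepdim=True)
    sig = (f.var(dim=(2, 3), keepdim=True) + eps).sqrt()
    normed = (f - mu) / sig                       # strip instance statistics

    idx = torch.randperm(f.size(0))               # shuffled partner instances
    lam = torch.distributions.Beta(alpha, alpha).sample((f.size(0), 1, 1, 1))
    mu_mix = lam * mu + (1 - lam) * mu[idx]
    sig_mix = lam * sig + (1 - lam) * sig[idx]
    return normed * sig_mix + mu_mix              # re-dress with mixed style

feats = torch.randn(8, 64, 16, 16)                # toy intermediate features
feats_aug = mix_feature_statistics(feats)
```

Because only first- and second-order statistics are exchanged, the spatial content of each feature map is left intact, which is why this style of augmentation works at feature level rather than pixel level.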