2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00906

Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation

Cited by 27 publications (12 citation statements). References: 28 publications.
“…The white-box FGSM [11] attack with a perturbation budget of ε = 3 is used for both AT and testing. Following the practice of [19], [20], we assume that attackers have the labels of the target dataset to generate adversarial examples. The rationale behind these settings is that (i) most existing UDA approaches [6], [8] are based on DANN's key idea; (ii) the white-box threat model has been considered a standard evaluation protocol for defenses [15], [17], [18], [23].…”
Section: Exploring AT for UDA
confidence: 99%
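For illustration, a minimal PyTorch-style sketch of the white-box FGSM setting described in this excerpt, assuming a segmentation model, target labels available to the attacker, and pixel intensities in [0, 255] (so a budget of ε = 3 corresponds to three intensity levels); names and defaults are placeholders, not the papers' code:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=3.0):
    """One-step FGSM: x_adv = x + eps * sign(grad_x L(f(x), y))."""
    images = images.clone().detach().requires_grad_(True)
    logits = model(images)                                    # (B, C, H, W) segmentation logits
    loss = F.cross_entropy(logits, labels, ignore_index=255)  # 255 = void/ignore label
    grad, = torch.autograd.grad(loss, images)
    x_adv = images + eps * grad.sign()                        # ascend the loss within the L-inf ball
    return x_adv.clamp(0, 255).detach()                       # assumes inputs in [0, 255]
```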
“…A naive way is to produce pseudo labels y_t using an external pre-trained UDA model. ASSUDA [20] resorts to this idea and applies it to the UDA semantic segmentation problem. Note that ASSUDA only evaluates black-box robustness.…”
Section: A Conventional AT on UDA
confidence: 99%
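A hedged sketch of the "naive" pseudo-label route mentioned above: a frozen, externally pre-trained UDA model supplies target pseudo-labels, which then stand in for ground truth during adversarial training. `uda_model`, `student`, and `at_step` are illustrative names, and `fgsm_attack` refers to the sketch shown earlier, not ASSUDA's actual implementation:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def make_pseudo_labels(uda_model, target_images):
    """Hard pseudo-labels: argmax predictions of the frozen, pre-trained UDA model."""
    uda_model.eval()
    return uda_model(target_images).argmax(dim=1)               # (B, H, W)

def at_step(student, optimizer, target_images, uda_model, eps=3.0):
    """One adversarial-training step on target data, using pseudo-labels as ground truth."""
    y_t = make_pseudo_labels(uda_model, target_images)
    x_adv = fgsm_attack(student, target_images, y_t, eps=eps)   # FGSM sketch from above
    loss = F.cross_entropy(student(x_adv), y_t, ignore_index=255)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```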
“…, the tendency of the classifier to collapse into the classes that are more represented, in contrast with the long tail of the most underrepresented ones) and conduct adversarial training on the segmentation network to improve its robustness. Yang et al. [219] study the adversarial vulnerability of existing DASiS methods and propose adversarial self-supervision UDA, where the objective is to maximize the proximity between clean images and their adversarial counterparts in the output space by using a contrastive loss. Huang et al. [78] propose a Fourier adversarial training method, where the pipeline is (i) generating adversarial samples by perturbing certain high-frequency components that do not carry significant semantic information and (ii) using them to train the model.…”
Section: Entropy Minimization of Target Predictions (TEM)
confidence: 99%
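As a rough illustration only (the cited papers' exact formulation may differ), an InfoNCE-style output-space contrastive loss between clean and adversarial views: the same spatial position in the two outputs forms a positive pair, and other randomly sampled positions act as negatives:

```python
import torch
import torch.nn.functional as F

def output_space_contrastive(clean_out, adv_out, num_pos=256, temperature=0.1):
    """Contrastive loss over a random subset of spatial positions of the two outputs."""
    B, C, H, W = clean_out.shape
    idx = torch.randint(H * W, (num_pos,), device=clean_out.device)           # subsample positions
    q = F.normalize(clean_out.flatten(2)[:, :, idx].transpose(1, 2), dim=-1)  # (B, N, C) clean view
    k = F.normalize(adv_out.flatten(2)[:, :, idx].transpose(1, 2), dim=-1)    # (B, N, C) adversarial view
    logits = torch.bmm(q, k.transpose(1, 2)) / temperature                    # (B, N, N) similarities
    targets = torch.arange(num_pos, device=clean_out.device).expand(B, -1)    # positives on the diagonal
    return F.cross_entropy(logits.reshape(-1, num_pos), targets.reshape(-1))
```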
“…The model is trained end-to-end using co-evolving pseudo-labels, produced by a momentum network (a copy of the original model that evolves slowly), and maintaining an exponentially moving class prior, which is used to discount the confidence thresholds for classes with few samples, in order to increase their relative contribution to the training loss. Also, Yang et al. [219], as mentioned in the previous paragraph, exploit self-supervision in DASiS by minimizing the distance between clean and adversarial samples in the output space via a contrastive loss.…”
Section: Entropy Minimization of Target Predictions (TEM)
confidence: 99%
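A speculative sketch of the self-training machinery this excerpt describes: a momentum (EMA) copy of the model produces pseudo-labels, and an exponentially moving class prior lowers the confidence threshold for under-represented classes. All names and the exact discounting rule are illustrative, not the cited papers' code:

```python
import torch

@torch.no_grad()
def ema_update(model, momentum_model, m=0.999):
    """Slowly evolve the momentum network towards the current model."""
    for p, p_m in zip(model.parameters(), momentum_model.parameters()):
        p_m.mul_(m).add_(p, alpha=1 - m)

@torch.no_grad()
def pseudo_label(momentum_model, images, class_prior, base_tau=0.9, alpha=0.99):
    """Pseudo-labels with class-frequency-discounted confidence thresholds.

    class_prior: running (C,) tensor of class frequencies, updated in place.
    """
    probs = momentum_model(images).softmax(dim=1)                # (B, C, H, W)
    conf, labels = probs.max(dim=1)                              # (B, H, W) confidence and class
    class_prior.mul_(alpha).add_(probs.mean(dim=(0, 2, 3)), alpha=1 - alpha)
    tau = base_tau * (class_prior / class_prior.max())[labels]   # rarer class -> lower threshold
    labels[conf < tau] = 255                                     # 255 = ignore index
    return labels
```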