Decorate the Newcomers: Visual Domain Prompt for Continual Test Time Adaptation
2022 · Preprint · DOI: 10.48550/arxiv.2212.04145

Cited by 2 publications (9 citation statements) · References 0 publications
“…(Wang et al 2022a) serves as the first approach to tackle this task, using a combination of bi-average pseudo labels and stochastic weight reset. While (Wang et al 2022a; Song et al 2023) addresses the continual shifts at the model level, (Gan et al 2022a) leverages visual domain prompts to address the problem in the classification task at the input level for the first time. In this paper, we evaluate our approach on both TTA and CTTA with a specific focus on the dense prediction task.…”
Section: Related Work · Citation type: mentioning · Confidence: 99%
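The "stochastic weight reset" credited above to (Wang et al 2022a, i.e. CoTTA) randomly restores a small fraction of the adapted weights to their pretrained source values after each update, limiting error accumulation and catastrophic forgetting. Below is a minimal PyTorch sketch of that idea; the restore probability `p` and the state-snapshot convention are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def stochastic_restore(model, source_state, p=0.01):
    """Randomly reset a fraction p of each parameter tensor's elements
    back to the frozen source (pretrained) weights. Sketch only."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if not param.requires_grad:
                continue
            # Bernoulli mask: 1 means "restore this element to its source value".
            mask = (torch.rand_like(param) < p).float()
            source = source_state[name].to(param.device)
            param.copy_(source * mask + param * (1.0 - mask))

# Usage sketch: snapshot the source weights once, then call after every
# test-time update.
#   source_state = {k: v.clone() for k, v in model.state_dict().items()}
#   ...adaptation step...
#   stochastic_restore(model, source_state, p=0.01)
```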
“…Motivated by recent advances of prompting in NLP (Li and Liang 2021; Liu et al 2023), VDP (Gan et al 2022a) first introduces a prompt-based method to tackle the classification TTA problem. It employs image-level prompts to enhance domain transfer efficiency and effectiveness.…”
Section: Introduction · Citation type: mentioning · Confidence: 99%
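To make "image-level prompts" concrete: a small learnable tensor lives in pixel space, is added to each incoming test image, and is the only thing updated at test time while the source model stays frozen. The sketch below is a hedged illustration of that general idea; the zero-initialized full-image prompt and the entropy-minimization objective are assumptions, not VDP's exact design.

```python
import torch
import torch.nn as nn

class VisualDomainPrompt(nn.Module):
    """Learnable pixel-space perturbation added to every input image
    (hypothetical zero-initialized, full-image variant)."""
    def __init__(self, image_size=(3, 224, 224)):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(1, *image_size))

    def forward(self, x):
        # "Decorate" the incoming image with the domain prompt.
        return x + self.prompt

def adapt_prompt(model, prompt, batch, optimizer):
    """One test-time step: the source model stays frozen; only the prompt
    parameters are updated, here by minimizing prediction entropy
    (an illustrative objective)."""
    model.eval()
    logits = model(prompt(batch))
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# Usage sketch: optimizer = torch.optim.SGD(prompt.parameters(), lr=1e-2)
```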
“…Specifically, CoTTA proposes to use the moving teacher model and augmentation-average predictions for noise suppression and the model stochastic restoration to avoid catastrophic forgetting. Following the scheme of CoTTA, some recent works [11,12,32] have addressed CTTA from different perspectives. Specifically, [12] leverages the temporal correlations of streamed input by reservoir sampling and instance-aware batch normalization.…”
Section: Related Work · Citation type: mentioning · Confidence: 99%
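The "moving teacher model" and "augmentation-average predictions" referenced here are the mean-teacher components of CoTTA: the teacher is an exponential moving average (EMA) of the student, and its pseudo-labels are averaged over several augmented views to suppress noise. A minimal sketch, assuming a generic `augment` callable and an illustrative momentum of 0.999:

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Teacher weights track the student as an exponential moving average."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

@torch.no_grad()
def augmentation_averaged_labels(teacher, batch, augment, n_views=4):
    """Average teacher predictions over several augmented views to
    produce less noisy pseudo-labels. Sketch only."""
    probs = torch.stack([teacher(augment(batch)).softmax(dim=1)
                         for _ in range(n_views)])
    return probs.mean(dim=0)
```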
“…Specifically, [12] leverages the temporal correlations of streamed input by reservoir sampling and instance-aware batch normalization. [11] proposes domain-specific prompts and domain-agnostic prompts to preserve domain-specific and domain-shared knowledge, respectively. EATA [32] performs adaptation on non-redundant samples for an efficient update.…”
Section: Related Work · Citation type: mentioning · Confidence: 99%
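EATA's "adaptation on non-redundant samples" filters the test stream so that only reliable (low-entropy) and mutually diverse samples trigger an update. The sketch below illustrates that spirit; the entropy threshold 0.4·ln(C) and the cosine-similarity redundancy check are simplified stand-ins for EATA's full criterion.

```python
import math
import torch
import torch.nn.functional as F

def select_samples(logits, moving_avg_prob=None,
                   ent_thresh=0.4 * math.log(1000), sim_thresh=0.95):
    """Return a boolean mask over the batch keeping samples that are
    (a) reliable (low prediction entropy) and (b) non-redundant with
    respect to a running average of past predictions. Sketch only;
    defaults assume C=1000 classes and are illustrative."""
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    keep = entropy < ent_thresh                      # reliability filter
    if moving_avg_prob is not None:
        sim = F.cosine_similarity(probs, moving_avg_prob.unsqueeze(0), dim=1)
        keep &= sim < sim_thresh                     # redundancy filter
    return keep
```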