2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv51458.2022.00214
Multi-Domain Incremental Learning for Semantic Segmentation

Abstract: Recent efforts in multi-domain learning for semantic segmentation attempt to learn multiple geographical datasets in a universal, joint model. A simple fine-tuning experiment performed sequentially on three popular road scene segmentation datasets demonstrates that existing segmentation frameworks fail at incrementally learning on a series of visually disparate geographical domains. When learning a new domain, the model catastrophically forgets previously learned knowledge. In this work, we pose the problem of…

Cited by 24 publications (22 citation statements)
References 50 publications
“…Research on domain-incremental learning for semantic segmentation is relatively sparse. Garg et al. [19] propose a dynamic architecture that learns dedicated parameters to capture domain-specific features for each domain. Mirza et al. [40] circumvent the issue of biased BatchNorm statistics by re-estimating and saving them for every domain, so that domain-specific statistics can be used during inference.…”
Section: Continual Learning
confidence: 99%
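The per-domain statistics idea attributed to Mirza et al. [40] can be sketched as follows. This is a minimal illustration, not the authors' code: the class and method names are invented, and a 1-D toy normalizer stands in for a real BatchNorm layer.

```python
class DomainAwareBatchNorm:
    """Toy 1-D batch norm that keeps separate running statistics per domain,
    so inference on a domain can use that domain's own statistics."""

    def __init__(self, eps=1e-5, momentum=0.1):
        self.eps = eps
        self.momentum = momentum
        self.stats = {}  # domain id -> (running_mean, running_var)

    def update(self, domain, batch):
        # Re-estimate statistics from the current batch of this domain.
        mean = sum(batch) / len(batch)
        var = sum((x - mean) ** 2 for x in batch) / len(batch)
        if domain not in self.stats:
            self.stats[domain] = (mean, var)
        else:
            # Exponential moving average, as in standard BatchNorm.
            m, v = self.stats[domain]
            self.stats[domain] = (
                (1 - self.momentum) * m + self.momentum * mean,
                (1 - self.momentum) * v + self.momentum * var,
            )

    def normalize(self, domain, batch):
        # At inference time, look up the statistics saved for this domain
        # instead of statistics biased toward the most recent domain.
        mean, var = self.stats[domain]
        return [(x - mean) / (var + self.eps) ** 0.5 for x in batch]
```

The key design point is the `stats` dictionary: the (cheap) normalization statistics are duplicated per domain while all learned weights stay shared, which avoids forgetting without growing the model meaningfully.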
“…We herein group them collectively under the umbrella of sequential UDA, and propose the taxonomy presented in Table 1 to clarify the differences among the individual settings. We denote the problem of learning multiple labeled target domains without retaining previously seen data as domain-incremental learning (DIL) [50], [51], [52], [53], [54], [55], [56], [57]. Another sub-category called source-free UDA -sometimes referred to as unsupervised model adaptation (UMA) -has recently gained increasing attention for both classification [58], [59], [60], [61] and segmentation tasks [62], [63], [64], [65], [66].…”
Section: Sequential Udamentioning
confidence: 99%
“…For what concerns methods designed ad hoc for SiS, different works consider the problem of learning different domains over the lifespan of a model (Wu et al., 2019; Porav et al., 2019; Garg et al., 2022). They face the domain-incremental problem by assuming that data from new domains come unlabeled, and are therefore more closely connected to the DASiS literature, where the typical task is unsupervised DA. Wu et al. (2019) propose to generate data that resembles that of the current target domain and to update the model's parameters relying on such samples.…”
Section: Domain-incremental SiS
confidence: 99%
“…They build their method by using GANs, and the proposed approach does not require domain-specific finetuning. Instead, Garg et al. (2022) learn domain-specific parameters for each new domain — in their case corresponding to different geographical regions — whereas other parameters are assumed to be domain-invariant.…”
Section: Domain-incremental SiS
confidence: 99%
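The split between domain-specific and domain-invariant parameters described above can be sketched in a few lines. This is a hedged illustration of the general idea, not Garg et al.'s architecture: the class, the scalar "parameters", and all names are invented for clarity.

```python
class MultiDomainModel:
    """Toy model with one shared (domain-invariant) parameter and a small
    set of dedicated (domain-specific) parameters per domain."""

    def __init__(self, shared_weight=1.0):
        self.shared_weight = shared_weight  # domain-invariant, learned once
        self.domain_scale = {}              # domain id -> dedicated parameter

    def add_domain(self, domain, scale=1.0):
        # Incrementally learning a new domain only adds new parameters;
        # the shared ones are left untouched, so earlier domains are
        # not overwritten (no catastrophic forgetting of their params).
        self.domain_scale[domain] = scale

    def forward(self, domain, x):
        # Prediction routes through the shared parameter plus the
        # parameters dedicated to the requested domain.
        return self.shared_weight * x * self.domain_scale[domain]
```

The point of the sketch is the growth pattern: capacity is added per domain (here one scalar per domain) while the bulk of the model stays shared, trading a small parameter overhead for isolation between domains.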