Proceedings of the 29th ACM International Conference on Multimedia 2021
DOI: 10.1145/3474085.3475306
Co-Transport for Class-Incremental Learning

Abstract: Traditional learning systems are trained in closed-world for a fixed number of classes, and need pre-collected datasets in advance. However, new classes often emerge in real-world applications and should be learned incrementally. For example, in electronic commerce, new types of products appear daily, and in a social media community, new topics emerge frequently. Under such circumstances, incremental models should learn several new classes at a time without forgetting. We find a strong correlation between old …

Cited by 49 publications (25 citation statements)
References 55 publications

Citation statements (ordered by relevance):
“…The common design results in highly different magnitudes across the separate classifier heads' outputs, since each head is optimized by its own loss function. A similar phenomenon was studied in Hou et al (2019); Zhou et al (2021), which are rehearsal-based methods with all the classifier heads unified together. They observe that the magnitudes of both the weights and the biases of the linear classifier for the new session's classes are significantly higher than those for the old sessions' classes.…”
Section: Rescuing Collapsed/Failed Methods for CDD3
Mentioning confidence: 74%
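The magnitude gap described in the statement above can be inspected and corrected directly on the unified linear classifier. Below is a minimal sketch, assuming a PyTorch nn.Linear head and a known split index num_old_classes (both illustrative), that rescales the new-session weight rows so their mean norm matches the old-session rows; it is in the spirit of the bias/weight corrections discussed around Hou et al (2019) and Zhou et al (2021), not those papers' exact procedure.

```python
import torch
import torch.nn as nn

def align_new_class_weights(classifier: nn.Linear, num_old_classes: int) -> None:
    """Rescale new-class weight rows so their mean L2 norm matches old classes.

    Hypothetical helper for illustration: assumes rows [0, num_old_classes)
    belong to previously seen classes and the rest to the new session.
    """
    with torch.no_grad():
        norms = classifier.weight.norm(p=2, dim=1)   # per-class weight norms
        old_mean = norms[:num_old_classes].mean()
        new_mean = norms[num_old_classes:].mean()
        gamma = old_mean / new_mean                  # shrink factor for new rows
        classifier.weight[num_old_classes:] *= gamma
        if classifier.bias is not None:
            classifier.bias[num_old_classes:] *= gamma

# Example: 50 old classes + 10 new classes over 512-d features.
clf = nn.Linear(512, 60)
align_new_class_weights(clf, num_old_classes=50)
```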
“…Discussion about related compatible training methods: some other works aim to build a compact embedding space [66], which can be seen as implicitly enhancing forward compatibility. For example, [67] seeks to detect new classes by learning placeholders, [27] utilizes an embedding with large margins between classes, and [34] encourages class-wise orthogonality for a more compact embedding.…”
Section: Methods
Mentioning confidence: 99%
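The class-wise orthogonality idea mentioned above can be written as a small regularizer. The sketch below, with the prototype tensor and loss weight as purely illustrative assumptions rather than the cited papers' exact formulation, penalizes off-diagonal cosine similarity between class prototypes so classes occupy near-orthogonal directions, keeping the embedding compact and leaving room for future classes.

```python
import torch
import torch.nn.functional as F

def orthogonality_penalty(prototypes: torch.Tensor) -> torch.Tensor:
    """prototypes: (num_classes, dim). Mean squared off-diagonal cosine similarity."""
    normed = F.normalize(prototypes, dim=1)
    gram = normed @ normed.t()                        # pairwise cosine similarities
    off_diag = gram - torch.diag(torch.diag(gram))    # zero out the diagonal
    return (off_diag ** 2).mean()

# Usage: add the penalty to the classification loss during base training,
# e.g. total = ce_loss + lambda_ortho * orthogonality_penalty(protos).
protos = torch.randn(10, 128, requires_grad=True)
loss = orthogonality_penalty(protos)
loss.backward()
```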
“…Metric-based algorithms utilize a pretrained backbone for feature extraction and employ proper distance metrics between support and query instances [26,39,41,47,54,55,59]. Class-Incremental Learning (CIL) aims to learn from a sequence of new classes without forgetting old ones, and is now widely discussed in various computer vision tasks [13,53,64,66]. Current CIL algorithms can be roughly divided into three groups.…”
Section: Related Work
Mentioning confidence: 99%
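The metric-based recipe summarized above (a pretrained backbone plus a distance metric between support and query instances) can be illustrated with a nearest-class-mean classifier. This is a hedged sketch under assumed interfaces (a frozen backbone callable returning feature vectors), not a specific method from the cited works.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def prototype_classify(backbone, support_x, support_y, query_x, num_classes):
    """Predict query labels via nearest class-mean prototypes under cosine similarity.

    Assumes every class in [0, num_classes) appears at least once in the support set.
    """
    s_feat = F.normalize(backbone(support_x), dim=1)
    q_feat = F.normalize(backbone(query_x), dim=1)
    # Class prototype = mean of that class's normalized support features.
    protos = torch.stack(
        [s_feat[support_y == c].mean(dim=0) for c in range(num_classes)]
    )
    protos = F.normalize(protos, dim=1)
    sims = q_feat @ protos.t()      # cosine similarity to each prototype
    return sims.argmax(dim=1)       # nearest prototype = predicted class
```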
“…Class-incremental learning also takes advantage of KD in a cross-task scenario, where non-overlapping sets of classes arrive sequentially. The classifier trained on previously seen classes is the teacher, which is incorporated in training the current stage's student without storing historical data [24], [9], [43], [61]. KD helps avoid catastrophic forgetting by matching the student's predictions over previous classes with the teacher's [62], [52].…”
Section: Related Work
Mentioning confidence: 99%
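A common way to instantiate the cross-task distillation described above is to treat the frozen previous-stage model as the teacher and match the student's logits over the old classes with a temperature-scaled KL term, alongside cross-entropy on the new data. The sketch below is a generic formulation; the temperature, loss weight, and the slice of logits being distilled are assumptions, not the specific recipe of [24], [9], [43], or [61].

```python
import torch
import torch.nn.functional as F

def incremental_kd_loss(student_logits, teacher_logits, targets,
                        num_old_classes, T=2.0, kd_weight=1.0):
    """Cross-entropy on all classes + distillation on the old-class logit slice.

    teacher_logits come from the frozen previous-stage model; only its
    first num_old_classes outputs are meaningful and get distilled.
    """
    ce = F.cross_entropy(student_logits, targets)
    # Match the student's old-class predictions to the frozen teacher's.
    s_old = F.log_softmax(student_logits[:, :num_old_classes] / T, dim=1)
    t_old = F.softmax(teacher_logits[:, :num_old_classes].detach() / T, dim=1)
    kd = F.kl_div(s_old, t_old, reduction="batchmean") * (T * T)
    return ce + kd_weight * kd
```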