2021
DOI: 10.48550/arxiv.2106.15827
Preprint

When Video Classification Meets Incremental Classes

Abstract: With the rapid development of social media, an enormous number of videos with new classes is generated daily, raising an urgent demand for video classification methods that can continuously learn new classes while retaining knowledge of old videos under limited storage and computing resources. In this paper, we summarize this task as Class-Incremental Video Classification (CIVC) and propose a novel framework to address it. As a subarea of incremental learning tasks, the challenge of catastrophic forgetting is …

Cited by 4 publications (5 citation statements)
References 62 publications
“…They thus proposed a method named Distillation and Retrospection to obtain a better balance between preservation of previous tasks and adaptation to new tasks. In video classification, Zhao et al. [186] adapted the knowledge-distillation-based method by separately distilling the spatial and temporal knowledge. They also introduced a dual-granularity exemplar selection method to store only key frames of representative video instances from the previous tasks.…”
Section: Combination of Knowledge Distillation and Memory-Based Methods
confidence: 99%
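The separate spatial and temporal distillation mentioned in this statement can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the dict layout, temperature, mixing weight `alpha`, and all function names are assumptions made here for clarity.

```python
import numpy as np

def softmax(x, t=2.0):
    # Temperature-scaled softmax over the class axis (numerically stable).
    z = np.exp((x - x.max(axis=-1, keepdims=True)) / t)
    return z / z.sum(axis=-1, keepdims=True)

def kd_loss(teacher_logits, student_logits, t=2.0):
    # Standard distillation objective: KL(teacher || student) on softened logits.
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

def spatio_temporal_kd(teacher, student, alpha=0.5, t=2.0):
    # Distill spatial and temporal logits independently, then mix the two
    # losses with weight alpha (illustrative choice, not from the paper).
    l_spatial = kd_loss(teacher["spatial"], student["spatial"], t)
    l_temporal = kd_loss(teacher["temporal"], student["temporal"], t)
    return alpha * l_spatial + (1.0 - alpha) * l_temporal
```

When teacher and student agree the loss is zero; any divergence in either the spatial or the temporal stream increases it, which is the property the distillation term exploits to preserve old-class knowledge.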
“…Despite the growing interest in CL in the image domain, the first three works to report results on video data were only recently published. Zhao et al. [44] proposed a spatio-temporal knowledge transfer strategy to mitigate catastrophic forgetting. A concurrent work [26] estimated the subset of feature channels that contributes the most to the predictions of the previous tasks. Although existing works bootstrapped the study of continual learning on video data, all of them use different evaluation protocols, making direct comparisons between methods difficult.…”
Section: Related Work
confidence: 99%
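Estimating which feature channels contribute most to old-task predictions, as this statement describes, is often done with a first-order saliency heuristic. The sketch below uses mean |activation × gradient| per channel; this estimator and both function names are assumptions for illustration, not necessarily what the cited work [26] computes.

```python
import numpy as np

def channel_importance(activations, gradients):
    # activations, gradients: (batch, channels, ...) arrays collected from the
    # old-task model. Score each channel by the mean absolute value of
    # activation * gradient, averaged over batch and spatial dimensions.
    contrib = np.abs(activations * gradients)
    axes = (0,) + tuple(range(2, contrib.ndim))
    return contrib.mean(axis=axes)

def top_channels(importance, k):
    # Indices of the k highest-scoring channels, most important first.
    return np.argsort(importance)[::-1][:k]
```

Channels ranked highly by such a score would then be protected (e.g., regularized or frozen) while the remaining capacity adapts to new classes.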
“…With such large volumes of data, it is important to develop models that can effectively learn from continuous streams of untrimmed video data. Remarkably, few research efforts have addressed continual learning with video [24,26,44]. Despite these recent works, video continual learning methods still show large variability in their experimental protocols, making direct comparisons hard to establish.…”
Section: Introduction
confidence: 99%
“…In the computer vision community, metric learning [19,29,42,59] is widely used for learning discriminative feature representations.…”
Section: Metric Learning
confidence: 99%
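As a concrete instance of the metric learning this statement refers to, the triplet loss is one widely used objective for learning discriminative embeddings. The sketch below is illustrative; the margin value and function shape are assumptions, not drawn from the cited works.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Encourage the anchor-positive distance to be smaller than the
    # anchor-negative distance by at least `margin` (hinge formulation).
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))
```

The loss vanishes once negatives are pushed at least `margin` farther from the anchor than positives, which is what makes the resulting feature space discriminative between classes.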