2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00120

Continual Learning of Object Instances

Cited by 2 publications (2 citation statements). References 17 publications.

Citation statements, ordered by relevance:
“…Due to their effectiveness and flexibility, regularization methods have been widely explored for tasks such as image classification [2][3][6], but are less explored for image retrieval. Recently, Parshotam et al. [7] regularize the representations via a normalized cross-entropy loss, training with metric learning for vehicle identification and retrieval. Chen et al. [8] propose regularizing both the representations and probabilities by combining a maximum mean discrepancy loss and a knowledge distillation loss [9] for fine-grained image retrieval (FGIR) [10].…”
Section: Feature Estimations Based Correlation Distillation For Incre...
confidence: 99%
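The combination the excerpt attributes to Chen et al. [8] can be made concrete with a short sketch: a maximum mean discrepancy (MMD) loss that aligns old- and new-model features, plus a Hinton-style knowledge distillation (KD) loss [9] on class probabilities. The following is a minimal PyTorch sketch under stated assumptions; the function names, the Gaussian kernel, the bandwidth `sigma`, the temperature `T`, and the weights `alpha`/`beta` are illustrative choices, not details taken from [8].

```python
import torch
import torch.nn.functional as F

def mmd_loss(old_feats, new_feats, sigma=1.0):
    """Maximum mean discrepancy between old- and new-model features,
    using a Gaussian (RBF) kernel. Biased estimator, kept simple for
    illustration; [8] may use a different kernel or estimator."""
    def rbf(x, y):
        dists = torch.cdist(x, y).pow(2)  # squared pairwise distances
        return torch.exp(-dists / (2 * sigma ** 2))
    return (rbf(old_feats, old_feats).mean()
            + rbf(new_feats, new_feats).mean()
            - 2 * rbf(old_feats, new_feats).mean())

def kd_loss(old_logits, new_logits, T=2.0):
    """Knowledge distillation [9]: match the new model's softened class
    probabilities to the old model's, with temperature T."""
    p_old = F.softmax(old_logits / T, dim=1)
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * T * T

def combined_regularizer(old_feats, new_feats, old_logits, new_logits,
                         alpha=1.0, beta=1.0):
    # alpha/beta are placeholder weights, not values reported in [8].
    return (alpha * mmd_loss(old_feats, new_feats)
            + beta * kd_loss(old_logits, new_logits))
```

Regularizing features (MMD) and probabilities (KD) together targets both levels at which an incrementally trained retrieval model can drift from its predecessor.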
“…For these methods, we further train a classifier on top of the embedding net so that these two methods can be reproduced correctly. The NCE loss [7] regularizes the inner product of an anchor-positive feature pair. In terms of the plasticity factor λ₂, we tune this factor for the four methods in incremental FGIR until we obtain their optimal performance. As a result, the corresponding plasticity factors are tuned as 8000, 0.2, 10, and 0.1, respectively.…”
Section: One-task Scenario Evaluation
confidence: 99%
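The NCE regularization this excerpt refers to can be read as a normalized cross-entropy over the inner products of L2-normalized anchor-positive feature pairs, scaled by a plasticity factor λ₂. Below is a minimal PyTorch sketch under that reading; the temperature, the pairing of anchors with positives, and the `lambda2` wiring are assumptions for illustration, not the exact formulation of [7].

```python
import torch
import torch.nn.functional as F

def nce_regularizer(anchor, positive, temperature=0.07):
    # L2-normalize so each inner product is a cosine similarity.
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    # (batch, batch) similarity matrix; matching pairs lie on the diagonal.
    logits = anchor @ positive.t() / temperature
    targets = torch.arange(anchor.size(0), device=anchor.device)
    # Cross-entropy pulls each anchor toward its own positive and away
    # from the other positives in the batch.
    return F.cross_entropy(logits, targets)

def regularized_loss(task_loss, anchor, positive, lambda2=0.2):
    # lambda2 plays the role of the plasticity factor tuned in the excerpt;
    # 0.2 is one of the values reported there, used here only as a default.
    return task_loss + lambda2 * nce_regularizer(anchor, positive)
```

The plasticity factor trades stability for plasticity: a larger λ₂ ties the new representation more tightly to the regularizer, while a smaller one lets the model adapt more freely to new data.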