2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00130
Noise-based Selection of Robust Inherited Model for Accurate Continual Learning

Cited by 9 publications (6 citation statements)
References 14 publications
“…We test the 9+1 single-head accuracy on each class with different memory budgets (Table 3). Compared with (Du et al. 2020), which used a memory budget of 2,000 IDD samples with accuracy below 90%, our proposed method achieves 91–96% accuracy in every class with only 1,000 IDD samples, and competitive performance even with 500 IDD samples.…”
Section: One-Class Learning With the OOD Detector
Confidence: 97%
“…We test the 9+1 single-head accuracy on each class with different memory budgets (Table 4). Compared with (Du et al. 2020), which used a memory budget of 2,000 IDD samples with accuracy below 90%, our proposed method achieves 91–96% accuracy in every class with only 1,000 IDD samples, and competitive performance even with 500 IDD samples.…”
Section: With Re-initialization Without Re-...
Confidence: 97%
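The statements above compare class-incremental accuracy under different rehearsal memory budgets (2,000 vs. 1,000 vs. 500 stored in-distribution samples). A minimal sketch of such a fixed-budget rehearsal buffer is shown below; the class `RehearsalBuffer` and its equal-share-per-class eviction policy are illustrative assumptions, not the method of any cited paper.

```python
import random

class RehearsalBuffer:
    """Hypothetical fixed-budget episodic memory for class-incremental
    learning: each seen class keeps an equal share of the total budget,
    with random eviction when a new class arrives."""

    def __init__(self, budget):
        self.budget = budget   # total number of stored exemplars
        self.store = {}        # class label -> list of exemplars

    def add_class(self, label, samples):
        # Register the new class, then shrink all shares to fit the budget.
        self.store[label] = list(samples)
        per_class = self.budget // len(self.store)
        for lbl, items in self.store.items():
            if len(items) > per_class:
                self.store[lbl] = random.sample(items, per_class)

    def __len__(self):
        return sum(len(v) for v in self.store.values())
```

With a budget of 1,000 and ten classes (the 9+1 setting), each class ends up with at most 100 exemplars, so the total never exceeds the budget.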
“…To address this, in this work we propose a new metric, model stability, derived from the loss landscape, to help shed light on accuracy under variations and model compression and to guide an algorithmic solution that mitigates the loss. Model stability is visualized via the loss landscape and evaluated by the roughness score [16]. A lower roughness score indicates a smoother loss landscape and a more stable model.…”
Section: Quantization 8-Bit Ternary RRAM Wri...
Confidence: 99%
“…Given a trained DNN model, its model stability is an intrinsic property to withstand perturbations, such as variations in model weights and input noise. Model stability of a DNN, i.e., the DNN generalization capability, is directly related to the contour of the loss function [16, 19–21]. A flatter contour of the loss function leads to a larger region of acceptable minima, which allows the DNN model to better tolerate variations in both weights and inputs.…”
Section: DNN Model Stability
Confidence: 99%
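The passage above relates stability to the flatness of the loss landscape around a trained model. A minimal sketch of one way to probe this is shown below: perturb the weights along random unit directions of a fixed radius and average the resulting loss increase. The function `roughness_score` and its perturbation scheme are an illustrative assumption, not the roughness score of reference [16].

```python
import numpy as np

def roughness_score(loss_fn, weights, radius=0.1, n_dirs=32, seed=0):
    """Hypothetical roughness estimate: mean loss increase when the
    weights are perturbed along random directions of a fixed radius.
    A lower score suggests a flatter, more stable loss landscape."""
    rng = np.random.default_rng(seed)
    base = loss_fn(weights)
    deltas = []
    for _ in range(n_dirs):
        d = rng.standard_normal(weights.shape)
        d *= radius / np.linalg.norm(d)   # scale probe to the given radius
        deltas.append(loss_fn(weights + d) - base)
    return float(np.mean(deltas))
```

For a sharply curved loss (large second derivative at the minimum) the score is larger than for a flat one at the same probe radius, matching the intuition that a flatter contour tolerates larger weight perturbations.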