2018
DOI: 10.48550/arxiv.1803.10123
Preprint

Task Agnostic Continual Learning Using Online Variational Bayes

Cited by 38 publications (52 citation statements), published 2019-2024.
References 15 publications.

“…To evaluate the proposed L2P, we closely follow the settings proposed in prior works [31,52,61], and conduct comprehensive experiments. In particular, we consider (1) the class-incremental setting, where the task identity is unknown during inference; (2) the domain-incremental setting, where the input domain shifts over time; (3) the task-agnostic setting, where there is no clear task boundary.…”
Section: Methods (mentioning)
confidence: 99%
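The three evaluation protocols in the quote above differ mainly in what the learner is told at training and test time. The sketch below is only an illustrative summary of those differences, not code from any of the cited papers; every function and variable name is hypothetical.

```python
# Illustrative sketch of the three continual-learning evaluation settings
# named above. Nothing here comes from the cited papers; names are made up.
from typing import Iterator, List, Optional, Tuple

Batch = Tuple[list, list]  # (inputs, labels); placeholders for real tensors


def class_incremental(tasks: List[Batch]) -> Iterator[Tuple[Batch, Optional[int]]]:
    """New classes arrive with each task; task identity is hidden at inference."""
    for batch in tasks:
        yield batch, None  # the model must choose among all classes seen so far


def domain_incremental(tasks: List[Batch]) -> Iterator[Tuple[Batch, Optional[int]]]:
    """Label space stays fixed; only the input distribution shifts over tasks."""
    for batch in tasks:
        yield batch, None  # one shared classifier head is reused for every domain


def task_agnostic(stream: List[Batch]) -> Iterator[Batch]:
    """No task boundaries at all: a non-iid stream with no switch signal."""
    for batch in stream:
        yield batch
```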
“…Existing works in CL have proposed a variety of CL settings. In this section we explain some of the major CL settings that CLEAR adopts and refer readers to [1,22,44,53,59] for more thorough discussion of different variants of CL setups.…”
Section: Continual Learning Settings (mentioning)
confidence: 99%
“…In this paper, we also adopt task-based sequential learning with a sequence of (same) 11-way classification tasks by splitting the temporal stream into 11 buckets, each consisting of a labeled subset for training and evaluation. However, it could be argued that in the real world, the model will not be informed about the task boundary (also called boundary-agnostic [33], task-free [1], or task-agnostic CL [59]). Such boundary-agnostic settings have been explored in recent works [1,4,23,59], in which a non-iid data stream continuously produces new samples without a notion of task switch.…”
Section: Continual Learning Settings (mentioning)
confidence: 99%
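As a concrete, purely illustrative reading of the protocol described in this quote, the sketch below splits a time-ordered dataset into 11 buckets with held-out evaluation subsets and contrasts that with a boundary-agnostic stream that never signals a task switch. The helper names and the NumPy-based indexing are assumptions, not the cited implementation.

```python
# Hypothetical sketch: task-based temporal buckets vs. a boundary-agnostic stream.
import numpy as np


def split_into_buckets(timestamps, num_buckets=11, eval_frac=0.2, seed=0):
    """Split a temporal stream into consecutive buckets, each with a held-out
    evaluation subset (task-based protocol). Returns (train_idx, eval_idx) pairs."""
    rng = np.random.default_rng(seed)
    order = np.argsort(timestamps)                 # sort samples by time
    buckets = np.array_split(order, num_buckets)   # 11 consecutive chunks
    splits = []
    for bucket in buckets:
        bucket = rng.permutation(bucket)
        n_eval = int(len(bucket) * eval_frac)
        splits.append((bucket[n_eval:], bucket[:n_eval]))  # (train, eval)
    return splits


def boundary_agnostic_stream(timestamps, batch_size=32):
    """Yield mini-batches in temporal order with no task-switch signal."""
    order = np.argsort(timestamps)
    for start in range(0, len(order), batch_size):
        yield order[start:start + batch_size]
```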
“…The method is, however, impractical. Zeno et al. (2018) use a logit masking related to ours, but their context is the multi-head setting of continual learning, and their goal is to activate only the head to which the samples within the new batch belong. However, our approach is …”
Section: Related Work (mentioning)
confidence: 99%
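The logit masking mentioned in this quote can be pictured as keeping only the output head of the task the current batch belongs to. Below is a minimal PyTorch sketch of that idea under our own assumptions (layer sizes, two classes per head, the `MultiHeadClassifier` name); it is not the implementation from Zeno et al. (2018) or the citing paper.

```python
# Hypothetical sketch of multi-head logit masking: only the head of the
# active task contributes finite logits; all other heads are masked out.
import torch
import torch.nn as nn


class MultiHeadClassifier(nn.Module):
    def __init__(self, in_dim: int, feat_dim: int, classes_per_task: int, num_tasks: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # all heads concatenated into one logit vector of size classes_per_task * num_tasks
        self.heads = nn.Linear(feat_dim, classes_per_task * num_tasks)
        self.classes_per_task = classes_per_task

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        logits = self.heads(self.backbone(x))
        mask = torch.full_like(logits, float("-inf"))    # suppress inactive heads
        lo = task_id * self.classes_per_task
        mask[:, lo:lo + self.classes_per_task] = 0.0     # keep the active head
        return logits + mask  # softmax now competes only within the active head


# Usage: with 5 tasks of 2 classes each, task 3 keeps only columns 6-7 finite.
model = MultiHeadClassifier(in_dim=784, feat_dim=256, classes_per_task=2, num_tasks=5)
out = model(torch.randn(8, 784), task_id=3)  # shape (8, 10)
```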