2021
DOI: 10.48550/arxiv.2112.02714
Preprint

CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks

Abstract: This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks in a particular CL setting called domain incremental learning (DIL). Each task is from a different domain or product. The DIL setting is particularly suited to ASC because in testing the system need not know the task/domain to which the test data belongs. To our knowledge, this setting has not been studied before for ASC. This paper proposes a novel model called CLASSIC. The key novelty is a contrastive con…
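The abstract is cut off at the description of the contrastive component. Purely as an illustration of what a contrastive objective over sentence representations looks like, the sketch below implements a generic supervised contrastive loss (SupCon-style) in PyTorch; it is not CLASSIC's actual objective, and the function name, batch shapes, and temperature value are assumptions made for the example.

    import torch
    import torch.nn.functional as F

    def supervised_contrastive_loss(features, labels, temperature=0.1):
        """Generic supervised contrastive loss: examples that share a label are
        pulled together in embedding space, all other pairs are pushed apart."""
        features = F.normalize(features, dim=1)                  # (B, d) unit-norm embeddings
        sim = features @ features.t() / temperature              # (B, B) scaled cosine similarities
        not_self = ~torch.eye(len(labels), dtype=torch.bool, device=features.device)
        pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
        sim = sim.masked_fill(~not_self, float("-inf"))          # exclude self-pairs
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        pos_counts = pos_mask.sum(dim=1)
        valid = pos_counts > 0                                    # anchors with at least one positive
        pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
        return -(pos_log_prob[valid] / pos_counts[valid]).mean()

    # Example with random sentence embeddings and sentiment labels.
    feats = torch.randn(8, 768)
    labels = torch.tensor([0, 1, 1, 0, 2, 2, 0, 1])
    print(supervised_contrastive_loss(feats, labels).item())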

Cited by 3 publications (6 citation statements) | References 24 publications
“…The prompt-based model is better than the adapter on both datasets. Consistent with Liu et al. (2021b), p-tuning v2's use of the CLS-token classification output works better than a verbalizer under multi-task learning. The performance of multi-task learning is usually considered the upper limit of a CL model.…”
Section: Main Experiments (mentioning, confidence: 98%)

“…Instead of searching for discrete template words, Li and Liang (2021) propose prefix-tuning, where tokens with trainable continuous embeddings are placed at the beginning of the text to perform generation tasks. P-tuning v2 (Liu et al., 2021b) also uses soft prompts and achieves promising results on natural language understanding and knowledge probing tasks. Different from the above methods, which study single-step adaptation, we are interested in prompt transfer in a CL environment.…”
Section: Prompt-based Tuning (mentioning, confidence: 99%)
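
To make the mechanism described in the quote concrete, here is a minimal input-level soft-prompt sketch in PyTorch: a matrix of trainable "virtual token" embeddings is prepended to a frozen encoder's input embeddings. It assumes a Hugging Face-style encoder that exposes get_input_embeddings() and accepts inputs_embeds; the class name, prompt length, and hidden size are illustrative. Note that prefix-tuning and P-tuning v2 in fact inject trainable prefixes at every transformer layer, not only at the input.

    import torch
    import torch.nn as nn

    class SoftPromptEncoder(nn.Module):
        """Prepends trainable 'virtual token' embeddings to a frozen encoder's
        input embeddings (input-level soft prompting)."""

        def __init__(self, encoder, prompt_length=20, hidden_size=768):
            super().__init__()
            self.encoder = encoder
            for p in self.encoder.parameters():            # freeze the pretrained encoder
                p.requires_grad = False
            self.prompt = nn.Parameter(torch.randn(prompt_length, hidden_size) * 0.02)

        def forward(self, input_ids, attention_mask):
            batch = input_ids.size(0)
            tok_emb = self.encoder.get_input_embeddings()(input_ids)    # (B, T, H)
            prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)     # (B, P, H)
            inputs_embeds = torch.cat([prompt, tok_emb], dim=1)         # prepend virtual tokens
            prompt_mask = torch.ones(batch, self.prompt.size(0),
                                     dtype=attention_mask.dtype,
                                     device=attention_mask.device)
            attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
            out = self.encoder(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
            return out.last_hidden_state                                 # (B, P + T, H)

Only self.prompt receives gradients, so each task can be given its own small prompt matrix while the backbone stays shared, which is the property the quoted work exploits for prompt transfer in continual learning.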

“…In a noteworthy contribution, Jin et al. (2021) deployed distillation-based techniques for the continuous incremental pre-training of language models across diverse domain corpora. In the context of sentiment analysis, Ke et al. (2021) explored aspect-based sentiment analysis tasks across different domains through contrastive continual learning. However, these approaches often fail to address the temporal influence, as observed by Luu et al. (2021), where data drift over time can negatively impact model performance.…”
Section: Continual Learning (mentioning, confidence: 99%)