Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)
DOI: 10.18653/v1/2021.emnlp-main.590

Continual Learning in Task-Oriented Dialogue Systems

Abstract: Continual learning in task-oriented dialogue systems allows the system to add new domains and functionalities over time after deployment, without incurring the high cost of retraining the whole system each time. In this paper, we propose a first-ever continual learning benchmark for task-oriented dialogue systems with 37 domains to be learned continuously in both modularized and end-to-end learning settings. In addition, we implement and compare multiple existing continual learning baselines, and we propose a …

Cited by 38 publications (54 citation statements: 1 supporting, 53 mentioning, 0 contrasting)
References 16 publications

“…To evaluate the overall performance on all tasks, we use the mean of all tasks' performance scores, following Sun et al. (2019), Mi et al. (2020), and Madotto et al. (2021). For each scenario (similar tasks and dissimilar tasks), we report the average of the mean scores over all sequences as an overall metric.…”
Section: Results and Analysis (mentioning)
Confidence: 99%
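
To make the metric concrete, here is a minimal sketch of the evaluation described above: the per-task scores measured after training on the full task sequence are averaged into a mean score, and those means are averaged again over the different task orderings (sequences). The function names and the example numbers are illustrative, not taken from the cited papers.

```python
import numpy as np

def mean_task_score(task_scores):
    """Mean of the per-task scores measured after training on the whole sequence."""
    return float(np.mean(task_scores))

def overall_metric(per_sequence_scores):
    """Average of the per-sequence mean scores over all task orderings."""
    return float(np.mean([mean_task_score(s) for s in per_sequence_scores]))

# Illustrative numbers: three orderings (sequences) of the same four tasks;
# each entry is one task's score at the end of continual training.
sequences = [
    [0.82, 0.75, 0.68, 0.71],
    [0.79, 0.77, 0.70, 0.69],
    [0.81, 0.74, 0.72, 0.70],
]
print(round(overall_metric(sequences), 4))  # average of the three per-sequence means
```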
“…Motivated by prior continual sequence generation work (Madotto et al., 2021) that uses Adapters (Houlsby et al., 2019) to insert a new adapter module into every transformer layer for each new task, we propose to strategically decide, before training on each new task, whether some adapter modules from old tasks can be reused. This is done in a two-stage manner: a decision stage, which determines the architecture for the new task, and a training stage, which trains the model.…”
Section: Two-stage Methods (mentioning)
Confidence: 99%
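
The adapter-insertion-with-reuse idea can be sketched in a few lines. Below is a minimal PyTorch illustration, assuming a frozen transformer backbone: a bottleneck residual adapter plus a per-layer pool that, in the decision stage, either reuses an old task's adapter or allocates a fresh one. The class names (Adapter, AdapterBank), the bottleneck size, and the assign/reuse API are hypothetical conveniences for this sketch, not the cited papers' implementations.

```python
from typing import Optional

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck residual adapter in the style of Houlsby et al. (2019)."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual form: the frozen backbone's representation passes through
        # unchanged and the adapter only learns a small corrective term.
        return h + self.up(self.act(self.down(h)))

class AdapterBank(nn.Module):
    """Adapter pool for one transformer layer. In the decision stage, each
    new task either reuses an existing adapter or gets a fresh one."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.hidden_size = hidden_size
        self.adapters = nn.ModuleDict()  # task name -> Adapter

    def assign(self, task: str, reuse_from: Optional[str] = None) -> None:
        # Decision stage: share an old task's module or allocate a new one.
        if reuse_from is not None:
            self.adapters[task] = self.adapters[reuse_from]
        else:
            self.adapters[task] = Adapter(self.hidden_size)

    def forward(self, task: str, h: torch.Tensor) -> torch.Tensor:
        return self.adapters[task](h)

# Usage sketch: only the adapter chosen for the current task is trained in
# the training stage, while the shared backbone stays frozen.
bank = AdapterBank(hidden_size=768)
bank.assign("hotel")                           # dissimilar task -> new adapter
bank.assign("restaurant", reuse_from="hotel")  # judged similar -> reuse
hidden = torch.randn(2, 10, 768)
out = bank("restaurant", hidden)               # shares the "hotel" adapter
```

Sharing a module under two keys in the ModuleDict means the reused adapter's parameters receive gradients from both tasks, which is the point of the decision stage: similar tasks share capacity instead of growing the model.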