2021
DOI: 10.1609/aaai.v35i18.18005
Dialog Router: Automated Dialog Transition via Multi-Task Learning

Abstract: Dialog Router is a general paradigm for human-bot symbiotic dialog systems that provides friendly customer-care service. It is equipped with a multi-task learning model that automatically captures the underlying correlation between multiple related tasks, i.e., dialog classification and regression, which improves the accuracy of dialog transition and greatly reduces the human labor required for system customization. In addition, for learning the multi-task model, the training data and labels are easy to collect from human-to…
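The abstract describes a shared model serving two related prediction tasks at once: dialog classification and a regression target. A minimal numpy sketch of that shared-encoder multi-task pattern is below; the dimensions, toy data, plain linear/tanh encoder, and the loss weighting are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_HID, N_CLASSES = 16, 8, 3

# Shared encoder parameters plus two task-specific heads
# (hypothetical sizes, randomly initialized for illustration).
W_enc = rng.normal(scale=0.1, size=(D_IN, D_HID))
W_cls = rng.normal(scale=0.1, size=(D_HID, N_CLASSES))
W_reg = rng.normal(scale=0.1, size=(D_HID, 1))

def encode(x):
    """Shared representation consumed by both tasks."""
    return np.tanh(x @ W_enc)

def forward(x):
    h = encode(x)
    logits = h @ W_cls                          # classification head
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)   # softmax over classes
    score = (h @ W_reg).squeeze(-1)             # regression head
    return probs, score

def multitask_loss(probs, score, y_cls, y_reg, alpha=0.5):
    """Weighted sum of cross-entropy and mean-squared error."""
    ce = -np.mean(np.log(probs[np.arange(len(y_cls)), y_cls] + 1e-9))
    mse = np.mean((score - y_reg) ** 2)
    return alpha * ce + (1 - alpha) * mse

# Toy batch: 4 "utterance" feature vectors with fabricated labels.
x = rng.normal(size=(4, D_IN))
y_cls = np.array([0, 2, 1, 0])
y_reg = np.array([0.1, 0.9, 0.5, 0.2])

probs, score = forward(x)
loss = multitask_loss(probs, score, y_cls, y_reg)
```

Because both heads read the same encoding, gradients from either task shape the shared representation; this is the correlation-sharing effect the abstract attributes to multi-task learning.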

Cited by 1 publication (2 citation statements); references 5 publications.
“…This method cannot sufficiently leverage the underlying knowledge within subtasks and various kinds of dialog data. Another similar work is a demo system for dialog transition (Huang et al, 2021) which tries a vanilla multi-task learning method (i.e., a dialog encoder tailed with two prediction subtasks). However, it still hardly fully utilizes the knowledge among dialog data and subtasks.…”
Section: Related Work
confidence: 99%
“…• Joint-CNN-BiRNN-Att (Joint-CBA) is a multi-task model by extending the last layer of CBA model to simultaneously predict NPS and dialog categories. • Vanilla Multi-task Model (VMM) follows the vanilla multi-task learning paradigm and uses the standard BERT encoder to encode only the utterance data (Huang et al, 2021), without any gated-mechanism modules.…”
Section: Baselines
confidence: 99%