Proceedings of the 1st Workshop on Document-Grounded Dialogue and Conversational Question Answering (DialDoc 2021), 2021
DOI: 10.18653/v1/2021.dialdoc-1.12

Document-Grounded Goal-Oriented Dialogue Systems on Pre-Trained Language Model with Diverse Input Representation

Abstract: A document-grounded goal-oriented dialogue system understands users' utterances and generates proper responses using information obtained from documents. The DialDoc21 shared task consists of two subtasks: subtask 1, finding text spans in documents associated with users' utterances, and subtask 2, generating responses based on the information obtained in subtask 1. In this paper, we propose two models (i.e., a knowledge span prediction model and a response generation model) for subtask 1 and subtask 2. In …
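The abstract (and the citation statements below) describe a two-stage pipeline: a span-prediction model selects the grounding text from the document, and a generation model conditions on that span to produce the response. The following is a minimal sketch of that pipeline using HuggingFace `transformers`, assuming RoBERTa for span prediction and BART for generation as the citing work states; the checkpoint names, input formatting, and decoding settings are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of the span-prediction -> response-generation pipeline.
# Checkpoints and formatting are placeholders, not the paper's exact setup.
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForQuestionAnswering,
    AutoModelForSeq2SeqLM,
)

span_tok = AutoTokenizer.from_pretrained("roberta-base")
span_model = AutoModelForQuestionAnswering.from_pretrained("roberta-base")
gen_tok = AutoTokenizer.from_pretrained("facebook/bart-base")
gen_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

def predict_span(dialogue_history: str, document: str) -> str:
    """Subtask 1: pick the document span that grounds the next response."""
    inputs = span_tok(dialogue_history, document,
                      truncation="only_second", max_length=512,
                      return_tensors="pt")
    with torch.no_grad():
        out = span_model(**inputs)
    start = out.start_logits.argmax(dim=-1).item()
    end = out.end_logits.argmax(dim=-1).item()
    tokens = inputs["input_ids"][0][start:end + 1]
    return span_tok.decode(tokens, skip_special_tokens=True)

def generate_response(dialogue_history: str, span: str) -> str:
    """Subtask 2: condition the generator on the history plus the span."""
    prompt = f"{dialogue_history} </s> {span}"
    ids = gen_tok(prompt, return_tensors="pt", truncation=True).input_ids
    out = gen_model.generate(ids, max_length=64, num_beams=4)
    return gen_tok.decode(out[0], skip_special_tokens=True)
```

Because the two stages are separate models, the generator never backpropagates into the span selector; this is exactly the missing interdependence between knowledge identification and response generation that the citing work below points out.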

Cited by 4 publications (3 citation statements); references 10 publications.
“…Response generation then aims at generating a proper agent response according to the dialogue context and the selected knowledge. Therefore, one straightforward solution for this problem is to use two models to conduct KI and RG in a pipeline manner (Daheim et al., 2021; Kim et al., 2021; Xu et al., 2021; Chen et al., 2021). However, such pipeline methods fail to capture the interdependence between KI and RG.…”
Section: Dialogue Context (mentioning)
Confidence: 99%
“…Most of them are based on pre-trained LMs. For example, Kim et al. (2021) used a RoBERTa and a BART model to predict slot values and generate responses, respectively. Their study employed a teacher model to score examples (using F1-score and BLEU) from easier to harder.…”
Section: Curriculum Learning (mentioning)
Confidence: 99%
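The statement above describes a teacher-scored curriculum: a teacher model scores each training example with span F1 and response BLEU, and the student then sees examples ordered from easier to harder. A minimal sketch of that idea follows; `teacher_predict` and the dataset fields (`gold_span`, `gold_response`) are hypothetical names, and the averaging of the two scores is an assumption rather than the paper's exact recipe.

```python
# Hedged sketch of teacher-scored curriculum ordering (easy -> hard).
# `teacher_predict` is a hypothetical stand-in for the teacher model.
from collections import Counter
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def token_f1(pred: str, gold: str) -> float:
    """SQuAD-style token-overlap F1 between predicted and gold spans."""
    p, g = pred.split(), gold.split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

def score_example(example, teacher_predict) -> float:
    """Higher score = the teacher handles the example well = easier."""
    pred_span, pred_response = teacher_predict(example)
    f1 = token_f1(pred_span, example["gold_span"])
    bleu = sentence_bleu([example["gold_response"].split()],
                         pred_response.split(),
                         smoothing_function=SmoothingFunction().method1)
    return (f1 + bleu) / 2  # simple average; the actual mix is an assumption

def curriculum_order(dataset, teacher_predict):
    """Sort training examples from easiest to hardest for the student."""
    return sorted(dataset,
                  key=lambda ex: score_example(ex, teacher_predict),
                  reverse=True)
```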
“…These easier examples are not restricted to costly data acquisition, e.g., our costs were marginally none.…”
[Flattened comparison table; recoverable row entries: Bengio et al. (2009), Pentina et al. (2015), Narvekar et al. (2016), Saito (2018), Fang et al. (2019), Foglino and Leonetti (2019), Kim et al. (2021), Dai et al. (2021), Zhu et al. (2021b), Zhao et al. (2022), CTL (ours); column headers not recoverable]
Section: Curriculum Learning (mentioning)
Confidence: 99%