Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1191

Improving Open-Domain Dialogue Systems via Multi-Turn Incomplete Utterance Restoration

Abstract: In multi-turn dialogue, utterances do not always take the full form of sentences. These incomplete utterances can greatly reduce the performance of open-domain dialogue systems. Restoring incomplete utterances from context could help such systems generate more relevant responses. To facilitate the study of incomplete utterance restoration for open-domain dialogue systems, a large-scale multi-turn dataset, Restoration-200K, is collected and manually labeled with the explicit relation between an…
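To make the restoration task concrete: an incomplete utterance drops content (for example, the entity under discussion) that must be recovered from earlier turns. The record below is invented for illustration and is not drawn from Restoration-200K:

```python
# Hypothetical dialogue record, for illustration only (not from Restoration-200K).
example = {
    "context": [
        "A: Have you seen the new sci-fi movie?",
        "B: Yes, I watched it last weekend.",
    ],
    # The incomplete utterance omits the entity it refers to.
    "incomplete": "A: Worth watching?",
    # The restored utterance makes the reference explicit.
    "restored": "A: Is the new sci-fi movie worth watching?",
}
```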

Cited by 44 publications (92 citation statements); references 20 publications. Citation types: 0 supporting, 92 mentioning, 0 contrasting.

Selected citation statements (ordered by relevance):
“…(iii) EM stands for the exact match accuracy. [A results table is omitted here: Table 2, experimental results of (Top) general and (Bottom) BERT-based models on MULTI; †: results from Pan et al. (2019).] A bolded number in a column indicates a statistically significant improvement against all the baselines (p < 0.05), whereas underlined numbers show comparable performances.…”
Section: Methods (mentioning; confidence: 99%)
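The EM metric referenced above counts a restoration as correct only when the predicted utterance is string-identical to the reference. A minimal sketch of such a scorer follows; the function name and the whitespace normalization are assumptions for illustration, not code from any cited paper:

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly equal their reference string.

    Illustrative only: real evaluations may additionally normalize
    casing or tokenization before comparing.
    """
    assert len(predictions) == len(references)
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)
```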
“…(ii) Transformer-based models consist of the basic transformer model (T-Gen) (Vaswani et al., 2017), the transformer-based pointer network (T-Ptr), and the transformer-based pointer generator (T-Ptr-Gen). (iii) State-of-the-art models consist of Syntactic (Kumar and Joshi, 2016), PAC (Pan et al., 2019), GECOR (Quan et al., 2019), and L-Ptr-λ and T-Ptr-λ (Su et al., 2019). We refer readers to their papers for more details.…”
Section: Methods (mentioning; confidence: 99%)