Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI
DOI: 10.18653/v1/w18-5708

Data Augmentation for Neural Online Chats Response Selection

Abstract: Data augmentation seeks to manipulate the available training data to improve the generalization ability of models. We investigate two data augmentation proxies, permutation and flipping, for the neural dialog response selection task on various models over multiple datasets, including both Chinese and English languages. Different from standard data augmentation techniques, our method combines the original and synthesized data for prediction. Empirical results show that our approach can gain 1 to 3 recall-at-1 points…
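The abstract names two augmentation proxies, permutation and flipping of the dialog context, but gives no implementation details here. The sketch below is a minimal illustration of that idea under assumptions of our own: the function name `augment_context`, the number of sampled permutations, and the choice to keep the original context alongside the synthesized variants are all illustrative, not taken from the paper.

```python
import random

def augment_context(utterances, max_permutations=3, seed=0):
    """Generate augmented dialog contexts by flipping and permuting the
    context utterances (oldest-to-newest list of turns).

    Returns the original context plus synthesized variants; the paper's
    exact procedure may differ from this sketch.
    """
    rng = random.Random(seed)
    augmented = [list(utterances)]                 # keep the original context

    # Flipping: reverse the order of the context utterances.
    augmented.append(list(reversed(utterances)))

    # Permutation: sample a few random reorderings of the utterances.
    for _ in range(max_permutations):
        perm = list(utterances)
        rng.shuffle(perm)
        if perm not in augmented:
            augmented.append(perm)
    return augmented

# Example: a three-turn context yields the original, its reversal,
# and up to three shuffled variants.
context = ["hi, my wifi keeps dropping", "which router model?", "it's an AC1200"]
for variant in augment_context(context):
    print(variant)
```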

Cited by 13 publications (11 citation statements) | References 22 publications
“…Such models are typically evaluated using Recall@k, a typical metric in information retrieval literature. This measures how often the correct response is identified as one of the top k ranked responses (Lowe et al., 2015; Inaba and Takahashi, 2016; Yu et al., 2016; Al-Rfou et al., 2016; Henderson et al., 2017; Lowe et al., 2017; Wu et al., 2017; Chaudhuri et al., 2018; Du and Black, 2018; Kumar et al., 2018; Zhou et al., 2018; Gunasekara et al., 2019; Tao et al., 2019). Models trained to select responses can be used to drive dialogue systems, question-answering systems, and response suggestion systems.…”
Section: Response Selection Task | mentioning
confidence: 99%
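The Recall@k metric described in this excerpt can be computed directly from ranked candidate lists. The helper below is a generic sketch (names and signature are our own), not evaluation code from any of the cited papers.

```python
def recall_at_k(ranked_candidates, correct_responses, k):
    """Fraction of examples whose correct response appears in the top-k ranking.

    `ranked_candidates`: one list of candidate responses per example, sorted
    by model score (best first). `correct_responses`: the gold response for
    each example.
    """
    hits = sum(
        1 for ranking, gold in zip(ranked_candidates, correct_responses)
        if gold in ranking[:k]
    )
    return hits / len(correct_responses)

# Recall@1 over two toy examples: the first ranks the gold response on top,
# the second does not, giving 0.5.
ranked = [["yes, rebooting helps", "no idea"],
          ["try again later", "yes, rebooting helps"]]
gold = ["yes, rebooting helps", "yes, rebooting helps"]
print(recall_at_k(ranked, gold, k=1))  # 0.5
```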
“…Response selection is also directly applicable to retrieval-based dialog systems, a popular and elegant approach to framing dialog (Wu et al., 2017; Weston et al., 2018; Mazaré et al., 2018; Gunasekara et al., 2019; Henderson et al., 2019b). Response selection is the task of selecting the most appropriate response given the dialog history (Wang et al., 2013; Al-Rfou et al., 2016; Du and Black, 2018; Chaudhuri et al., 2018). This task is central to retrieval-based dialog systems, which typically encode the context and a large collection of responses in a joint semantic space, and then retrieve the most relevant response by matching the query representation against the encodings of each candidate response.…”
Section: Introduction | mentioning
confidence: 99%
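The retrieval-based setup this excerpt describes (encode the context and the candidate responses in a joint space, then match by similarity) can be sketched as follows. The `encode` function here is a toy stand-in for a trained neural encoder, and the names and scoring details are assumptions of this sketch, not any cited model's API.

```python
import hashlib
import numpy as np

def encode(text, dim=8):
    """Toy deterministic text encoder standing in for a learned context/response
    encoder; a real system would use a trained neural network."""
    seed = int(hashlib.md5(text.encode("utf-8")).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    vec = rng.normal(size=dim)
    return vec / np.linalg.norm(vec)

def select_response(context, candidate_responses):
    """Rank candidate responses by dot-product similarity to the encoded context."""
    context_vec = encode(context)
    response_matrix = np.stack([encode(r) for r in candidate_responses])
    scores = response_matrix @ context_vec        # joint-space matching
    order = np.argsort(-scores)                   # best-scoring candidate first
    return [candidate_responses[i] for i in order]

candidates = ["please restart the router", "the weather is nice", "try updating firmware"]
print(select_response("my wifi keeps dropping", candidates)[0])
```

In practice the response encodings are precomputed for the whole response collection, so selection reduces to a single matrix-vector product followed by a top-k lookup.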
“…There is a scarcity of the data required to train a dialog system for most tasks. Various methods have been proposed to tackle this issue, including paraphrase techniques to generate artificial training data (Kumar et al., 2021; Du and Black, 2018), generating annotations such as intent-slots and dialog acts (Yoo et al., 2019, 2020a), and injecting noise to improve robustness in dialog act prediction for ASR data (Wang et al., 2020).…”
Section: Introduction | mentioning
confidence: 99%