Evaluating Dialogue Generation Systems via Response Selection
2020. DOI: 10.5715/jnlp.27.677

Cited by 1 publication (1 citation statement). References 3 publications.

“…For instance, ADEM cannot discriminate between gold responses and certain classes of adversarial negatives, e.g., reversed gold responses or repeating the context as the response (Sai et al., 2019). Sato et al. (2020) evaluate dialog systems through their ability to select valid responses from a semi-automatically curated candidate list. Mehri and Eskenazi (2020b) introduce the unsupervised, reference-free USR metric, which leverages a suite of RoBERTa (Liu et al., 2019) models, each finetuned to score one of five dialog aspects, e.g., Natural and Uses Knowledge.…”
Section: Related Work
Confidence: 99%
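The response-selection protocol that the citation attributes to Sato et al. (2020) reduces system evaluation to ranking: a system is scored by how often it ranks the ground-truth response above the curated false candidates. The following is a minimal sketch of that accuracy computation, assuming a hypothetical score_response(context, response) function standing in for whatever score a given dialog model assigns; it is an illustration, not the paper's actual implementation.

    # Sketch of response-selection evaluation (hits@1).
    # Assumption: `score_response` is a hypothetical stand-in for any
    # dialog model's scoring function; it is not from Sato et al. (2020).
    from typing import Callable, List

    def selection_accuracy(
        contexts: List[str],
        candidate_lists: List[List[str]],
        gold_indices: List[int],
        score_response: Callable[[str, str], float],
    ) -> float:
        # Fraction of contexts for which the model ranks the gold
        # response highest among the curated candidates.
        correct = 0
        for context, candidates, gold in zip(contexts, candidate_lists, gold_indices):
            scores = [score_response(context, c) for c in candidates]
            best = max(range(len(candidates)), key=scores.__getitem__)
            correct += int(best == gold)
        return correct / len(contexts)

Because the candidate lists are curated to contain plausible but invalid responses, this accuracy is meaningful only insofar as the false candidates are genuinely hard negatives; with random negatives the task becomes trivial.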