Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume 2021
DOI: 10.18653/v1/2021.eacl-main.308
Rethinking Coherence Modeling: Synthetic vs. Downstream Tasks

Abstract: Although coherence modeling has come a long way in developing novel models, their evaluation on downstream applications for which they are purportedly developed has largely been neglected. With the advancements made by neural approaches in applications such as machine translation (MT), summarization and dialog systems, the need for coherence evaluation of these tasks is now more crucial than ever. However, coherence models are typically evaluated only on synthetic tasks, which may not be representative of their…


Cited by 5 publications (9 citation statements)
References 37 publications (50 reference statements)
“…On LMVLM, the UNC model has a better performance; we suspect that its explicit conditional language modeling loss might provide an additional advantage for this particular task. Overall, our results are consistent with observations from Mohiuddin et al. (2021) that show poor generalizability in the previous SOTA model.…”
supporting
confidence: 91%
“…We report the results obtained by Mohiuddin et al. (2021) and Pishdad et al. (2020) on their evaluation tasks for SOTA neural coherence models in Table 6. Mesgar & Strube (2018).…”
Section: A5 Comparison of Existing State-of-the-art Coherence Models
mentioning
confidence: 99%