Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.491

Reference and Document Aware Semantic Evaluation Methods for Korean Language Summarization

Abstract: Text summarization refers to the process of generating a shorter form of text from a source document while preserving its salient information. Existing work on text summarization is generally evaluated using recall-oriented understudy for gisting evaluation (ROUGE) scores. However, because ROUGE scores are computed from n-gram overlap, they do not reflect semantic correspondence between generated and reference summaries. Because Korean is an agglutinative language that combines various morphemes in…
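The abstract's central claim is that n-gram overlap can assign a low score to a semantically correct summary, which matters especially for agglutinative Korean. The following is a minimal sketch of that failure mode, not the paper's proposed metric; the example sentences and whitespace tokenization are illustrative assumptions.

```python
# Minimal sketch: unigram-overlap (ROUGE-1-style) F1 over whitespace tokens.
# Not the paper's metric; the sentences are illustrative assumptions only.
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1-style F1 between whitespace-tokenized strings."""
    ref_counts = Counter(reference.split())
    cand_counts = Counter(candidate.split())
    overlap = sum((ref_counts & cand_counts).values())  # clipped match count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Two Korean sentences with the same meaning; agglutinative particles and
# verb endings change every surface token, so unigram overlap is zero.
reference = "정부가 새로운 정책을 발표했다"  # "The government announced a new policy"
candidate = "정부는 새 정책의 발표를 진행했다"  # same meaning, different morphemes

print(rouge1_f1(reference, candidate))  # 0.0 despite semantic equivalence
```

A morpheme-aware or embedding-based metric, as the paper's title suggests, would credit such a pair, whereas plain n-gram matching cannot.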

Cited by 8 publications (8 citation statements)
References 39 publications (38 reference statements)
“…In particular, languages in which n-gram-based metrics perform poorly due to the language's structure (e.g., Korean (Lee et al., 2020)) would benefit the most from our approach.…”
Section: Results
confidence: 99%
“…This has been shown to be effective in various tasks, including review reading comprehension (Xu et al., 2019) and SuperGLUE (Wang et al., 2019a). Existing works on multi-turn response selection (Whang et al., 2020; Gu et al., 2020; Humeau et al., 2020) also adapted this post-training approach and obtained state-of-the-art results. We also employ this post-training method in this work and show its effectiveness in improving performance.…”
Section: Proposed Methods, Language Models for Response Selection
confidence: 99%
“…Training Response Selection Models: Following several studies based on contextual language models for multi-turn response selection (Whang et al., 2020; Lu et al., 2020; Gu et al., 2020), a pointwise approach is used to learn a cross-encoder that receives both the dialog context and the response simultaneously. Suppose that a dialog agent is given a dialog dataset…”
Section: Proposed Methods, Language Models for Response Selection
confidence: 99%
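The statement above describes a standard pointwise cross-encoder setup: the dialog context and a candidate response are fed to the model together and scored with a binary match objective. Below is a minimal sketch under assumptions the quote does not pin down (a generic BERT checkpoint and toy data); it is not the cited papers' exact training code.

```python
# Minimal pointwise cross-encoder sketch (assumed checkpoint and toy data;
# not the cited papers' exact setup). Context and response are encoded
# jointly and scored with a binary match / no-match objective.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1  # single logit = match score
)

context = "How do I reset my password? Have you tried the settings page?"
response = "Yes, but I cannot find the reset option there."
label = torch.tensor([1.0])  # 1.0 = true response, 0.0 = sampled negative

# Cross-encoder: one forward pass over the concatenated (context, response)
# pair, so the model can attend across both sequences at once.
inputs = tokenizer(context, response, return_tensors="pt", truncation=True)
logits = model(**inputs).logits.squeeze(-1)

# Pointwise objective: binary cross-entropy on the match score.
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, label)
loss.backward()  # one training step; optimizer update omitted for brevity
```

Unlike a bi-encoder, which encodes context and response separately, this pointwise cross-encoder trades inference speed for full cross-attention between the two inputs.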
“…Token-level metrics like ROUGE or BERTScore are not suited to all language morphologies [36]. We want BEAMetrics to measure the multilingual ability of a metric.…”
Section: Design Principles
confidence: 99%