2010
DOI: 10.1007/978-3-642-12116-6_61
GEMS: Generative Modeling for Evaluation of Summaries

Cited by 3 publications (3 citation statements)
References 9 publications
“…In contrast to the previous methods, the GEMS (Generative Modeling for Evaluation of Summaries) approach of Katragadda (2010) suggests the use of signature terms to analyze how they are captured in peer summaries. Signature terms (also known as topic signatures) are word vectors related to a particular topic.…”
Section: Summary Content
confidence: 99%
“…Whereas Basic Elements uses Minipar, DEPEVAL(summ) is tested with different parsers, for instance the Charniak parser. GEMS (Generative Modelling for Evaluation of Summaries) (Katragadda 2010) suggests the use of signature terms in order to analyse how they are captured in automatic summaries. The signature terms are calculated on the basis of part-of-speech tags (such as nouns or verbs), query terms, and terms from the reference summaries.…”
Section: Informativeness Evaluation
confidence: 99%
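The signature-term idea these citation statements describe can be sketched as follows: topic signatures are commonly extracted with a log-likelihood ratio test comparing term frequencies in topic-relevant documents against a background corpus, and a peer summary is then scored by how many signature terms it captures. This is a minimal illustrative sketch of that general technique, not the paper's actual GEMS implementation; the corpora, threshold, and function names are assumptions.

```python
import math
from collections import Counter

def llr(k1, n1, k2, n2):
    """Log-likelihood ratio that a term's rate differs between the
    topic corpus (k1 hits in n1 tokens) and background (k2 in n2)."""
    def ll(k, n, p):
        # binomial log-likelihood; clamp p to avoid log(0)
        p = min(max(p, 1e-12), 1 - 1e-12)
        return k * math.log(p) + (n - k) * math.log(1 - p)
    p = (k1 + k2) / (n1 + n2)          # pooled rate (null hypothesis)
    p1, p2 = k1 / n1, k2 / n2          # separate rates (alternative)
    return 2 * (ll(k1, n1, p1) + ll(k2, n2, p2)
                - ll(k1, n1, p) - ll(k2, n2, p))

def topic_signature(topic_tokens, background_tokens, threshold=10.83):
    """Terms significantly more frequent in the topic corpus.
    10.83 is the chi-square critical value at p < 0.001 (a common choice)."""
    t, b = Counter(topic_tokens), Counter(background_tokens)
    n1, n2 = len(topic_tokens), len(background_tokens)
    return {w for w in t
            if t[w] / n1 > b.get(w, 0) / n2
            and llr(t[w], n1, b.get(w, 0), n2) > threshold}

def coverage(summary_tokens, signature):
    """Fraction of signature terms captured by a peer summary."""
    if not signature:
        return 0.0
    return len(signature & set(summary_tokens)) / len(signature)
```

For example, if the topic documents mention "volcano" and "ash" far more often than a background corpus does, those words enter the signature, and a peer summary containing only "volcano" would score 0.5 coverage against a two-term signature.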
“…Researchers have questioned whether ROUGE is really able to capture the informativeness of summaries. Over the years, other automatic metrics have been proposed like Basic Elements (Hovy, Lin, Zhou, and Fukumoto, 2006), AutoSummENG (Giannakopoulos and Karkaletsis, 2009) and GEMS (Generative Modelling for Evaluation of Summaries) (Katragadda, 2010) to name a few, but none managed to demonstrate enough advantages to replace ROUGE as the standard evaluation metric. Lloret and Palomar (2011) discuss in detail these alternative evaluation metrics, whilst Owczarzak, Conroy, Dang, and Nenkova (2012) provide an assessment of evaluation metrics used in multi-document summarisation evaluations.…”
Section: Intrinsic Evaluation Methods In Text Summarisation
confidence: 99%