2002
DOI: 10.1017/s1351324901002741

SUMMAC: a text summarization evaluation

Abstract: The TIPSTER Text Summarization Evaluation (SUMMAC) has developed several new extrinsic and intrinsic methods for evaluating summaries. It has established definitively that automatic text summarization is very effective in relevance assessment tasks on news articles. Summaries as short as 17% of full text length sped up decision-making by almost a factor of 2 with no statistically significant degradation in accuracy. Analysis of feedback forms filled in after each decision indicated that the intelligibility of …

Cited by 118 publications (84 citation statements)
References 39 publications
“…However, quantifying performance in terms of decision-making accuracy on compressed titles (as compared to the full titles) is informative because it illustrates how summarization techniques assist the user's end task, that of knowledge exploration and information gathering. In general, we believe that information presentation issues provide a general framework for task-based evaluation of summarization systems; see also (Mani et al. 2002; Dorr et al. 2005) for similar setups.…”
Section: Grounding Summarization In Real-world Tasks
confidence: 88%
“…Developing realistic usage scenarios is challenging, but often the “goodness” of a summary can only be meaningfully operationalized in its “usefulness” for a particular task. One might, for example, measure how summaries impact question answering (Morris et al. 1992; Mani et al. 2002) or relevance judgments (Dorr et al. 2005). One possible hypothesis is that summaries allow users to make quicker decisions (since they have to read less), without compromising the quality of those decisions.…”
Section: Related Work
confidence: 99%
“…In addition, we also evaluated the quality of the extracted scenes as perceived by humans, which is necessary, given the approximate nature of our gold standard. We adopted a question-answering (Q&A) evaluation paradigm which has been used previously to evaluate summaries and document compression (Morris et al., 1992; Mani et al., 2002; Clarke and Lapata, 2010). Under the assumption that the summary is to function as a replacement for the full script, we can measure the extent to which it can be used to find answers to questions which have been derived from the entire script and are representative of its core content.…”
Section: Methods
confidence: 99%
“…The abstract may provide an introductory overview of its topic or argument for readers to whom the document is of marginal interest, and make a reading of the full document unnecessary" [2]. Also, summaries as short as 17% of full text length sped up decision-making by almost a factor of 2 with no statistically significant degradation in accuracy [3].
Section: Definitions Of Summary, Abstract, and Automatic Text Summarization
confidence: 99%