Interspeech 2008
DOI: 10.21437/interspeech.2008-604

Packing the meeting summarization knapsack

Abstract: Despite considerable work in automatic meeting summarization over the last few years, comparing results remains difficult due to varied task conditions and evaluations. To address this issue, we present a method for determining the best possible extractive summary given an evaluation metric like ROUGE. Our oracle system is based on a knapsack-packing framework, and though NP-hard, it can be solved near-optimally by a genetic algorithm. To frame new research results in a meaningful context, we suggest presenting…
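The oracle idea described in the abstract — pick the subset of sentences that maximizes an evaluation score subject to a length budget, a knapsack-style problem — can be illustrated with a small genetic algorithm over binary inclusion vectors. Everything below is a hypothetical sketch, not the paper's implementation: the toy sentences and reference are invented, unigram recall stands in for ROUGE, and the GA parameters (population size, mutation rate) are arbitrary.

```python
import random

# Hypothetical toy data: candidate sentences and a reference summary.
sentences = [
    "the committee approved the budget",
    "lunch was served at noon",
    "the budget covers three new projects",
    "attendees discussed project deadlines",
]
reference = "committee approved budget for three projects and deadlines"
budget = 12  # maximum summary length in words (the knapsack capacity)

ref_tokens = set(reference.split())

def fitness(mask):
    """Unigram recall of the selected sentences; 0 if over budget."""
    chosen = [s for s, m in zip(sentences, mask) if m]
    words = " ".join(chosen).split()
    if len(words) > budget:
        return 0.0
    return len(ref_tokens & set(words)) / len(ref_tokens)

def ga_oracle(pop_size=30, generations=60, seed=0):
    """Evolve binary inclusion masks toward the best-scoring summary."""
    rng = random.Random(seed)
    n = len(sentences)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]     # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:           # mutation: flip one bit
                i = rng.randrange(n)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga_oracle()
```

On a real corpus the fitness function would be the actual evaluation metric (e.g. ROUGE against the human abstracts), which is exactly why the oracle is metric-dependent: changing the metric changes which "knapsack packing" is optimal.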

Cited by 36 publications (14 citation statements)
References 15 publications (9 reference statements)
“…We conducted experiments on the widely-used AMI (McCowan et al, 2005) and ICSI (Janin et al, 2003) benchmark datasets. We used the traditional test sets of 20 and 6 meetings respectively for the AMI and ICSI corpora (Riedhammer et al, 2008). Each meeting in the AMI test set is associated with a human abstractive summary of 290 words on average, whereas each meeting in the ICSI test set is associated with 3 human abstractive summaries of respective average sizes 220, 220 and 670 words.…”
Section: Datasets
“…• Random and Longest Greedy are basic baselines recommended by (Riedhammer et al, 2008) • Oracle is the same as the random baseline, but uses the human extractive summaries as input.…”
Section: Baselines
“…To describe the difference between the ROUGE n scores of oracle and system summaries in multiple document summarization tasks, Riedhammer et al (2008) proposed an approximate algorithm with a genetic algorithm (GA) to find oracle summaries. Moen et al (2014) utilized a greedy algorithm for the same purpose.…”
Section: Definition of Extractive Oracle Summaries