2020
DOI: 10.48550/arxiv.2011.04132
Preprint

Automatic Summarization of Open-Domain Podcast Episodes

Abstract: We present implementation details of our abstractive summarizers that achieve competitive results on the Podcast Summarization task of TREC 2020. A concise textual summary that captures important information is crucial for users to decide whether to listen to the podcast. Prior work focuses primarily on learning contextualized representations. Instead, we investigate several less-studied aspects of neural abstractive summarization, including (i) the importance of selecting important segments from transcripts t…

Cited by 1 publication (1 citation statement)
References 7 publications
“…Alternatively, earlier methods show that good content selection helps abstractive news summarization systems (Chen and Bansal, 2018; Gehrmann et al., 2018; Hsu et al., 2018). Hybrid systems that select sentences and then generate an abstractive summary have been proposed, such as an extractive system + TLM for scientific articles (Pilault et al., 2020), simple selection + BART for podcasts (Manakul and Gales, 2020; Song et al., 2020), and guided summarization by BERT-based keyword/sentence extraction + BART for news and scientific articles (He et al., 2020; Dou et al., 2021).…”
Section: Related Work
confidence: 99%
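
The "simple selection + BART" hybrids mentioned in the citation statement follow a two-stage pattern: pick salient sentences from a long transcript, then summarize only the selected content abstractively. The sketch below is a minimal illustration of that pattern, not the exact systems of Manakul and Gales (2020) or Song et al. (2020); the TF-IDF scoring heuristic, the facebook/bart-large-cnn checkpoint, and the top-20 sentence budget are illustrative assumptions.

```python
# Minimal sketch of a selection + BART hybrid summarizer.
# Extractive step: rank transcript sentences by TF-IDF salience (an assumed heuristic).
# Abstractive step: summarize the selected sentences with a pretrained BART model.
from sklearn.feature_extraction.text import TfidfVectorizer
from transformers import BartTokenizer, BartForConditionalGeneration

def summarize(transcript_sentences, max_selected=20):
    # Score each sentence by the sum of its TF-IDF term weights.
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(transcript_sentences)
    scores = tfidf.sum(axis=1).A1

    # Keep the highest-scoring sentences, preserving their original order.
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:max_selected]
    selected = " ".join(transcript_sentences[i] for i in sorted(top))

    # Generate an abstractive summary of the selected content with BART.
    tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
    inputs = tokenizer(selected, truncation=True, max_length=1024, return_tensors="pt")
    summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=128)
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```

The selection step matters because podcast transcripts are far longer than BART's 1024-token input limit; truncating blindly discards content, whereas even a simple salience filter keeps the summarizer focused on the informative segments.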