Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.336
Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization

Abstract: Text summarization is one of the most challenging and interesting problems in NLP. Although much attention has been paid to summarizing structured text like news reports or encyclopedia articles, summarizing conversations, an essential part of human-human/machine interaction where most important pieces of information are scattered across various utterances of different speakers, remains relatively under-investigated. This work proposes a multi-view sequence-to-sequence model by first extracting conversational struc…

Cited by 101 publications (97 citation statements)
References 25 publications
“…Such methods struggled with generating succinct, fluent, and natural summaries, especially when the key information needs to be aggregated from multiple first-person point-of-view utterances (Song et al., 2020). Abstractive conversation summarization overcomes these issues by designing hierarchical models, incorporating commonsense knowledge (Feng et al., 2020), or leveraging conversational structures like dialogue acts (Goo and Chen, 2018), key point sequences (Liu et al., 2019a), topic segments (Liu et al., 2019b), and stage developments (Chen and Yang, 2020). Some recent research has also utilized discourse relations as input features in classifiers to detect important content in conversations (Murray et al., 2006; Bui et al., 2009; Qin et al., 2017).…”
Section: Related Work
confidence: 99%
“…Discourse Relation Graph: Utterances from different speakers do not occur in isolation; instead, they are related within the context of discourse (Murray et al., 2006; Qin et al., 2017), which has been shown effective for dialogue understanding like identifying the decisions in multi-party dialogues (Bui et al., 2009) and detecting salient content in email conversations (McKeown et al., 2007). Although current attention-based neural models are supposed to, or might implicitly, learn certain relations between utterances, they often struggle to focus on many informative utterances (Chen and Yang, 2020; Song et al., 2020) and fail to address long-range dependencies (Xu et al., 2020), especially when there are frequent interruptions. As a result, explicitly incorporating the discourse relations will help neural summarization models better encode the unstructured conversations and concentrate on the most salient utterances to generate more informative and less redundant summaries.…”
Section: Structured Graph Construction
confidence: 99%
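To make the idea in the quoted passage concrete, a conversation can be represented as a directed graph whose nodes are utterance indices and whose edges carry discourse-relation labels. The sketch below is a minimal illustration of that representation only, not the cited paper's implementation; the utterances and relation labels are hypothetical examples.

```python
# Minimal sketch (not the authors' implementation) of a discourse-relation
# graph over a conversation: nodes are utterance indices, edges carry a
# hypothetical discourse-relation label such as "Question-Answer".
from collections import defaultdict

utterances = [
    "Are we meeting at 3pm?",          # 0
    "Yes, in the main room.",          # 1
    "Actually, can we push to 4pm?",   # 2
    "Sure, 4pm works for me.",         # 3
]

# (source utterance index, target utterance index, relation label)
relations = [
    (0, 1, "Question-Answer"),
    (1, 2, "Correction"),
    (2, 3, "Question-Answer"),
]

def build_graph(relations):
    """Return an adjacency map: node -> list of (neighbor, relation label)."""
    graph = defaultdict(list)
    for src, tgt, label in relations:
        graph[src].append((tgt, label))
    return dict(graph)

graph = build_graph(relations)
print(graph[0])  # [(1, 'Question-Answer')]
```

Such an explicit adjacency structure is what lets a model attend along labeled discourse edges rather than relying only on surface token order.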
“…Dialogue summaries are useful for participants to recap salient information in the talk and for absentees to grasp the key points. As a result, several models have been recently proposed to summarize daily conversations (Gliwa et al., 2019; Chen and Yang, 2020), meeting transcripts (Zhu et al., 2020) and customer support conversations.…”
Section: Introduction
confidence: 99%