Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.109
Structure-Aware Abstractive Conversation Summarization via Discourse and Action Graphs

Abstract: Abstractive conversation summarization has received much attention recently. However, the generated summaries often suffer from insufficient, redundant, or incorrect content, largely due to the unstructured and complex characteristics of human-human interactions. To this end, we propose to explicitly model the rich structures in conversations for more precise and accurate conversation summarization, by first incorporating discourse relations between utterances and action triples ("WHO-DOING-WHAT") in utterances thro…
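The abstract describes representing a conversation as a graph whose nodes are utterances, with discourse-relation edges between them and "WHO-DOING-WHAT" action triples attached to utterances. A minimal data-structure sketch of that idea is below; the class and field names are hypothetical illustrations, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Utterance:
    idx: int
    speaker: str
    text: str
    # Hypothetical "WHO-DOING-WHAT" action triple extracted from the
    # utterance; None when no action can be extracted.
    action: Optional[Tuple[str, str, str]] = None

@dataclass
class ConversationGraph:
    utterances: List[Utterance]
    # Discourse edges: (source utterance idx, target utterance idx, relation label).
    edges: List[Tuple[int, int, str]] = field(default_factory=list)

    def add_discourse_edge(self, src: int, dst: int, relation: str) -> None:
        self.edges.append((src, dst, relation))

    def neighbors(self, idx: int) -> List[Tuple[int, str]]:
        # Outgoing discourse links of one utterance.
        return [(d, r) for s, d, r in self.edges if s == idx]

utts = [
    Utterance(0, "Amanda", "I baked cookies. Do you want some?",
              action=("Amanda", "baked", "cookies")),
    Utterance(1, "Jerry", "Sure!"),
]
graph = ConversationGraph(utts)
graph.add_discourse_edge(0, 1, "Question-Answer")
print(graph.neighbors(0))  # → [(1, 'Question-Answer')]
```

A summarizer conditioned on such a graph can attend over discourse edges and action triples in addition to the raw token sequence.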

Cited by 37 publications (25 citation statements). References 42 publications.
“…Dialogue Summarization Dialogue summarization aims to generate concise summaries for dialogues, such as meetings (McCowan et al., 2005; Janin et al., 2003; Zhong et al., 2021; Shang et al., 2018; Zhu et al., 2020a), TV series, interviews (Zhu et al., 2021), and chitchat (Gliwa et al., 2019; Zhao et al., 2020; Chen and Yang, 2021). Some summarization datasets (not limited to dialogues) contain queries asking to summarize specific parts of dialogues (Zhong et al., 2021; Nema et al., 2017), while others only require summarizing whole dialogues (Gliwa et al., 2019; Hermann et al., 2015).…”
Section: Related Work
confidence: 99%
“…To gain deeper insight into the types of factuality errors introduced by different abstractive dialogue summarization systems, we propose a new taxonomy of factuality errors for abstractive dialogue summarization, based on our empirical experiments and annotations of the performance of a set of representative baseline summarization models on the SAMSum dataset, a widely used large-scale dialogue summarization dataset of English chat-message dialogues (see Section 4.1). Specifically, we generate summaries of SAMSum dialogues using state-of-the-art abstractive dialogue summarization models, including models fine-tuned from T5 (Raffel et al., 2020), Pegasus (Zhang et al., 2020), BART, D-HGN (Xiachong et al., 2021), and S-BART (Chen and Yang, 2021b). We then manually annotate all the errors in these generated summaries that are inconsistent with the source dialogue, compute detailed statistics for these factuality errors, and classify them into categories.…”
Section: Generated Summary
confidence: 99%
“…We performed a human evaluation of four model outputs on 19 SAMSum dialogues in order to identify the limitations of abstractive summarization models on dialogue summarization tasks. The four models used in this human evaluation are two BART models with different random seeds (ROUGE-L 48 and 49), D-HGN (ROUGE-L 40) (Xiachong et al., 2021), and S-BART (ROUGE-L 48) (Chen and Yang, 2021b). BART and S-BART are pre-trained models, and D-HGN is trained from scratch.…”
Section: Annotation and Analysis
confidence: 99%
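The ROUGE-L scores quoted above measure the longest common subsequence (LCS) between a generated summary and a reference. A minimal self-contained sketch of the metric is below; the `beta=1.2` weighting follows a common summarization convention and this is an illustration, not the evaluation script used in the cited work.

```python
def lcs_length(a, b):
    """Dynamic-programming longest common subsequence over token lists."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def rouge_l_f1(candidate, reference, beta=1.2):
    """LCS-based F-measure; recall is weighted by beta**2."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return ((1 + beta**2) * prec * rec) / (rec + beta**2 * prec)

print(rouge_l_f1("amanda baked cookies", "amanda baked cookies"))  # → 1.0
```

An identical candidate and reference score 1.0; a candidate with extra or missing tokens scores strictly between 0 and 1.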
“…GupShup (Mehnaz et al., 2021) develops the first code-switched dialogue summarization dataset, whose dialogues are in Hindi-English. Building on these datasets, a body of work (Chen and Yang, 2020; Wu et al., 2021; Xiachong et al., 2021; Chen and Yang, 2021; Feng et al., 2021b) models conversation characteristics and achieves strong performance. All of these efforts target monolingual or code-switched scenarios, whereas we focus on the cross-lingual scenario in this paper.…”
Section: Introduction
confidence: 99%