TODSum: Task-Oriented Dialogue Summarization with State Tracking

Preprint, 2021
DOI: 10.48550/arxiv.2110.12680

Abstract: Previous dialogue summarization datasets mainly focus on open-domain chitchat dialogues, while summarization datasets for the broadly used task-oriented dialogue haven't been explored yet. Automatically summarizing such task-oriented dialogues can help a business collect and review needs to improve the service. Besides, previous datasets pay more attention to generating good summaries with higher ROUGE scores, but they hardly understand the structured information of dialogues and ignore the factuality of summaries…

Cited by 2 publications (3 citation statements)
References: 40 publications
“…Dialogue summarization is the task of generating a concise and fluent summary of a conversation involving two or more participants. It has gained significant attention due to its broad applications and availability of relevant datasets (Gliwa et al., 2019; Zhao et al., 2021). Solutions on dialogue summarization are mainly based on sequence-to-sequence models including the pointer-generation network (See et al., 2017), T5 (Raffel et al., 2020) and BART (Lewis et al., 2020).…”
Section: Dialogue Summarization (mentioning, confidence: 99%)
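The statement above names sequence-to-sequence summarizers (pointer-generator networks, T5, BART) as the standard approach to dialogue summarization. As a minimal illustrative sketch, not taken from the TODSum paper, the Python snippet below applies an off-the-shelf BART summarization checkpoint from the Hugging Face transformers library to a toy task-oriented dialogue; the checkpoint name (facebook/bart-large-cnn, trained on news rather than dialogue) and the example conversation are assumptions chosen only to show the interface.

# Minimal sketch: sequence-to-sequence dialogue summarization with BART.
# Assumptions (not from the paper): the checkpoint "facebook/bart-large-cnn"
# and the toy dialogue below; a dialogue-tuned BART or T5 checkpoint would
# normally be preferred for task-oriented summaries.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

dialogue = (
    "User: Hi, I'd like to book a table for two tonight at 7 pm.\n"
    "Agent: Sure, which cuisine do you prefer?\n"
    "User: Something Italian near the city centre.\n"
    "Agent: Done. I booked Luigi's at 7 pm for two people."
)

# The pipeline encodes the dialogue and decodes a short abstractive summary.
summary = summarizer(dialogue, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])

In practice such a checkpoint would be fine-tuned on a dialogue summarization dataset (e.g., SAMSum or TODSum) before evaluation, since news-trained models are not adapted to conversational turns.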
“…For length awareness, each sample is augmented once. This results in a total of (14.7k + 12.5k + …) We evaluate and benchmark our method on three dialogue summarization datasets including SAMSum (Gliwa et al., 2019), DialogSum and TODSum (Zhao et al., 2021). These datasets are equipped with dialogues and human-written or verified summaries.…”
Section: Model Training (mentioning, confidence: 99%)
“…Our dataset focuses on summaries where deliberations are resolved and only task-relevant information (e.g., user intents and slots) is retained. While some datasets aim to summarize task-oriented dialogues from customer services (Zhao et al., 2021; Feigenblat et al., 2021; Lin et al., 2021), they are not designed to summarize chats between users; rather, they summarize a chat between user and agent, which has different dynamics than collaborative decision-making between users.…”
Section: Related Work (mentioning, confidence: 99%)