2019
DOI: 10.1017/s1351324919000123
Query-based summarization of discussion threads

Abstract: In this paper, we address query-based summarization of discussion threads. New users can profit from the information shared in the forum if they can retrieve previously posted information. However, discussion threads on a single topic can easily comprise dozens or hundreds of individual posts. Our aim is to summarize forum threads given real web search queries. We created a data set with search queries from a discussion …

Cited by 9 publications (11 citation statements)
References 52 publications (80 reference statements)
“…In general, it is desirable to have high inter-annotator agreement (IAA), which is usually measured using Cohen's kappa, Fleiss' kappa coefficient, or Krippendorff's alpha. Alternatively, although less commonly, IAA can be measured using Jaccard similarity or an F1 measure (based on precision and recall between annotators) [160]. Achieving sufficiently high inter-annotator agreement is more difficult on NLG tasks that leave room for subjectivity [2].…”
Section: Human Evaluation Setup (mentioning, confidence: 99%)
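The agreement measures named in the statement above can be sketched concretely. The following is a minimal illustration, not the cited paper's evaluation code; the binary annotations are hypothetical, standing in for judgments such as "is this sentence summary-worthy?".

```python
# Sketch of two IAA measures mentioned above: Cohen's kappa and the
# less common Jaccard-similarity alternative. Annotations are made up.
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(a) == len(b)
    n = len(a)
    # Observed agreement: fraction of items labelled identically.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    ca, cb = Counter(a), Counter(b)
    p_e = sum((ca[lab] / n) * (cb[lab] / n) for lab in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

def jaccard(a, b):
    """Jaccard similarity of the item sets each annotator marked positive."""
    sa = {i for i, x in enumerate(a) if x == 1}
    sb = {i for i, x in enumerate(b) if x == 1}
    return len(sa & sb) / len(sa | sb)

# Hypothetical binary annotations from two annotators.
ann1 = [1, 1, 0, 1, 0, 1, 1, 0]
ann2 = [1, 1, 0, 0, 0, 1, 1, 1]

print(round(cohens_kappa(ann1, ann2), 4))  # 0.4667
print(round(jaccard(ann1, ann2), 4))       # 0.6667
```

Note how kappa (0.47) is well below the raw agreement rate (0.75) because it discounts agreement expected by chance, which is why it is preferred over simple percent agreement.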
“…A word embedding is a learned representation for text in which words with similar meanings have similar representations. This kind of representation has been successful in extractive summarization [32]. WordNet [11] is the most commonly used technique for capturing and processing the semantic meaning of terms; however, it has not been used as much for summarizing opinions.…”
Section: Related Work (mentioning, confidence: 99%)
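The defining property of word embeddings noted in the statement above — similar meanings yield similar vectors, typically compared by cosine similarity — can be shown with toy vectors. The 3-dimensional vectors below are invented for illustration; real embeddings (e.g. word2vec or GloVe) have hundreds of dimensions learned from corpora.

```python
# Toy sketch: near-synonyms get vectors that are closer (by cosine
# similarity) than unrelated words. Vectors are hypothetical.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

vectors = {
    "car":        [0.90, 0.10, 0.30],
    "automobile": [0.85, 0.15, 0.25],
    "banana":     [0.05, 0.90, 0.40],
}

# The synonym pair scores higher than the unrelated pair.
print(cosine(vectors["car"], vectors["automobile"]) >
      cosine(vectors["car"], vectors["banana"]))  # True
```

Extractive summarizers exploit exactly this property, e.g. by scoring sentence similarity to a query or to the document centroid in embedding space.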
“…The move from single-document summarisation to multi-document summarisation was largely driven by the need to deal with the increasing amount of information available online. Initially, the methods were applied to newswire texts, but more recent work has focused on summarisation of user-generated content such as discussions on forums (Tigelaar, Op Den Akker, and Hiemstra, 2010; Verberne, Krahmer, Wubben, and van den Bosch, 2019) and customer reviews, with the output focused on product features to assist customers' decision making (Feiguina and Lapalme, 2007).…”
Section: Single-Document vs Multi-Document Summarisation (mentioning, confidence: 99%)
“…Initially, the methods were applied to newswire texts, but more recent work has focused on summarisation of user-generated content such as discussions on forums (Tigelaar, Op Den Akker, and Hiemstra 2010; Verberne et al. 2019) and customer reviews, with the output focused on product features to assist customers’ decision making (Feiguina and Lapalme 2007).…”
Section: What Is a Summary? (mentioning, confidence: 99%)