2018
DOI: 10.13053/cys-22-3-3027
Unsupervised Sentence Embeddings for Answer Summarization in Non-factoid CQA

Abstract: This paper presents a method for summarizing answers in Community Question Answering (CQA). We explore a deep auto-encoder and a Long Short-Term Memory (LSTM) auto-encoder for sentence representation. The sentence representations are used to measure similarity in the Maximal Marginal Relevance (MMR) algorithm for extractive summarization. Experimental results on a benchmark dataset show that our unsupervised method achieves state-of-the-art performance while requiring no annotated data.
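The MMR selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the `lambda_` trade-off value, and the assumption that sentence embeddings (e.g. from an auto-encoder) are already available are all illustrative.

```python
# Sketch of Maximal Marginal Relevance (MMR) extractive selection.
# Assumes sentence vectors are precomputed (e.g. by an auto-encoder);
# `mmr`, `cosine`, and lambda_=0.7 are illustrative choices, not from the paper.
import numpy as np

def cosine(a, b):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mmr(query_vec, sent_vecs, k=2, lambda_=0.7):
    """Greedily pick k sentence indices balancing relevance to the query
    against redundancy with sentences already selected."""
    selected, candidates = [], list(range(len(sent_vecs)))
    while candidates and len(selected) < k:
        def score(i):
            relevance = cosine(query_vec, sent_vecs[i])
            redundancy = max(
                (cosine(sent_vecs[i], sent_vecs[j]) for j in selected),
                default=0.0,
            )
            return lambda_ * relevance - (1 - lambda_) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

Each round picks the sentence with the best relevance-minus-redundancy score, so later picks are pushed toward content not yet covered by the summary.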

Cited by 3 publications (2 citation statements)
References 8 publications
“…For unsupervised learning methods, LexRank [59] is a textual PageRank-like algorithm that selects the most salient sentences from a reference. Using embedding similarity for sorting, W2VLSTM [60] is an improvement based on LexRank. With the development of deep neural generation networks, Seq3 [4] was proposed to use the “long-short-long” pattern to automatically generate a summary.…”
Section: Methods (mentioning)
confidence: 99%
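The LexRank idea the statement above mentions (a PageRank-style ranking over a sentence-similarity graph) can be sketched briefly. The similarity matrix, damping factor, and iteration count below are illustrative assumptions, not values from the cited paper.

```python
# Sketch of the LexRank idea: power iteration on a row-normalized
# sentence-similarity matrix; higher stationary score = more salient sentence.
# damping=0.85 and iters=100 are conventional illustrative defaults.
import numpy as np

def lexrank_scores(sim, damping=0.85, iters=100):
    """Return a salience score per sentence from a symmetric similarity matrix."""
    n = sim.shape[0]
    # Row-normalize so each row is a probability distribution over neighbors.
    trans = sim / sim.sum(axis=1, keepdims=True)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        # Damped PageRank-style update toward the stationary distribution.
        scores = (1 - damping) / n + damping * (trans.T @ scores)
    return scores
```

A sentence similar to many other sentences accumulates score from its neighbors, which is why such methods pick "central" sentences as the summary.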
“…Many researchers have implemented zero-shot learning for image processing applications (Xie et al, 2019; Fu et al, 2018b; Liu et al, 2018; Xiong et al, 2016; Gavves et al, 2015), and only a few works have used it for text processing (Artetxe & Schwenk, 2019; Zhang et al, 2019; Fu et al, 2018a; Yazdani & Henderson, 2015). This paper focuses on the implementation of zero-shot learning for text processing, especially for non-factoid question answering, and then summarizes the appropriate answers using the summarization techniques adapted in (Ha et al, 2018; Cao et al, 2017). This model could be incorporated into teaching and learning platforms such as Massive Open Online Courses (MOOCs).…”
Section: Introduction (mentioning)
confidence: 99%