Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019)
DOI: 10.18653/v1/n19-1071
SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression

Abstract: Neural sequence-to-sequence models are currently the dominant approach in several natural language processing tasks, but require large parallel corpora. We present a sequence-to-sequence-to-sequence autoencoder (SEQ^3), consisting of two chained encoder-decoder pairs, with words used as a sequence of discrete latent variables. We apply the proposed model to unsupervised abstractive sentence compression, where the first and last sequences are the input and reconstructed sentences, respectively, while the middle …
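To make the two-chain architecture concrete, here is a minimal sketch of a sequence-to-sequence-to-sequence autoencoder under simplifying assumptions (GRU encoder-decoder pairs, greedy decoding of the latent words, no attention); the module names `compressor` and `reconstructor` and all hyperparameters are illustrative, not the authors' implementation.

```python
# Minimal sketch of a sequence-to-sequence-to-sequence autoencoder: two chained
# encoder-decoder pairs in which the intermediate (latent) sequence is made of words.
# GRU layers, greedy decoding and all names/sizes here are illustrative assumptions.
import torch
import torch.nn as nn


class Seq2Seq(nn.Module):
    """One encoder-decoder pair mapping a token sequence to another token sequence."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src_ids, max_len):
        _, h = self.encoder(self.embed(src_ids))          # summarize the source
        tok = src_ids.new_zeros((src_ids.size(0), 1))     # assumed <bos> id = 0
        step_logits, step_ids = [], []
        for _ in range(max_len):                          # decode max_len tokens
            dec_out, h = self.decoder(self.embed(tok), h)
            logits = self.out(dec_out)                    # (batch, 1, vocab)
            tok = logits.argmax(-1)                       # hard pick; the real model
            step_logits.append(logits)                    # relaxes this discrete step
            step_ids.append(tok)                          # to stay differentiable
        return torch.cat(step_logits, 1), torch.cat(step_ids, 1)


class Seq2Seq2Seq(nn.Module):
    """Chain: input sentence -> short latent word sequence -> reconstructed sentence."""

    def __init__(self, vocab_size):
        super().__init__()
        self.compressor = Seq2Seq(vocab_size)      # first encoder-decoder pair
        self.reconstructor = Seq2Seq(vocab_size)   # second encoder-decoder pair

    def forward(self, src_ids, summary_len):
        _, latent_ids = self.compressor(src_ids, summary_len)
        recon_logits, _ = self.reconstructor(latent_ids, src_ids.size(1))
        return latent_ids, recon_logits


# Toy usage: the reconstruction loss against the input sentence drives training,
# and the length-constrained latent word sequence serves as the compression.
model = Seq2Seq2Seq(vocab_size=10_000)
src = torch.randint(1, 10_000, (2, 20))            # two toy "sentences" of 20 ids
latent, logits = model(src, summary_len=8)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10_000), src.reshape(-1))
```

The detail this sketch glosses over is differentiability: the hard argmax over latent words blocks gradients to the compressor, which the actual model avoids by relaxing the discrete choice (hence "differentiable" in the title).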

Cited by 41 publications (38 citation statements: 1 supporting, 37 mentioning, 0 contrasting)
References 24 publications
“…Dataset. As in Baziotis et al. (2019), we train our model on the Gigaword corpus (GIGA; Rush et al., 2015). However, we used only 30K sentences, randomly picked from sentences with fewer than 50 words, for the training of EALM.…”
Section: Methods (mentioning)
confidence: 99%
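For readers who want to reproduce the subsampling step described in this statement, a short sketch follows; the file names and whitespace tokenization are assumptions, since the statement does not specify them.

```python
# Sketch of the subsampling described above: keep Gigaword sentences with fewer
# than 50 (whitespace-split) words, then randomly pick 30K of them for training.
# File names and the tokenization choice are assumptions for illustration only.
import random

random.seed(13)  # arbitrary seed, fixed for reproducibility

with open("gigaword_train.txt", encoding="utf-8") as f:
    short_sentences = [line.strip() for line in f if len(line.split()) < 50]

subset = random.sample(short_sentences, k=30_000)

with open("gigaword_30k.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(subset) + "\n")
```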
“…We followed Baziotis et al. (2019) in the evaluation as well, using the test set consisting of the GIGA (1,897 sentences) and DUC datasets (DUC3 with 624 sentences, DUC4 with 500 sentences; Over et al., 2007).…”
Section: Methods (mentioning)
confidence: 99%
“…Recently, Ren et al. (2019) used posterior regularization for unsupervised NMT, employing an SMT model, which is robust to noisy data, as a prior over a neural TM to guide it in the iterative back-translation process. Finally, LMs have been used in a similar fashion, as priors over latent text sequences in discrete latent variable models (Miao and Blunsom, 2016; Havrylov and Titov, 2017; Baziotis et al., 2019).…”
Section: Related Work (mentioning)
confidence: 99%
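To illustrate the "language model as a prior over latent text" idea mentioned in the statement above, here is a hedged sketch of one common formulation: a KL penalty between the distributions a compressor assigns to each latent word and a pretrained LM's next-token distributions. The function name and the exact form of the penalty are assumptions; the cited papers differ in how they realize this.

```python
# Hedged sketch: a pretrained language model acting as a prior over a latent word
# sequence, implemented as a KL penalty between the latent-word distributions of
# the compressor and the LM's next-token distributions over the same prefix.
import torch.nn.functional as F


def lm_prior_penalty(latent_logits, lm_logits):
    """
    latent_logits: (batch, steps, vocab) scores over latent words from the compressor.
    lm_logits:     (batch, steps, vocab) next-token scores from a pretrained LM that
                   reads the sampled latent prefix.
    Returns KL(q_latent || p_LM), summed over steps and averaged over the batch.
    """
    latent_logp = F.log_softmax(latent_logits, dim=-1)
    lm_logp = F.log_softmax(lm_logits, dim=-1)
    return F.kl_div(lm_logp, latent_logp, log_target=True, reduction="batchmean")
```

Added to the reconstruction loss, such a term nudges the sampled latent words toward sequences the LM finds plausible, which is what encourages the latent sentence to stay human-readable.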
“…The model described in Wang and Lee (2018) utilizes seq2seq auto-encoding coupled with adversarial training and reinforcement learning. Fevry and Phang (2018) and Baziotis et al. (2019) focus on sentence summarization (i.e., compression). Chu and Liu (2019) propose yet another denoising-auto-encoding-based model, in the multi-document summarization domain.…”
Section: Related Work (mentioning)
confidence: 99%