Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1254

Learning Generic Sentence Representations Using Convolutional Neural Networks

Abstract: We propose a new encoder-decoder approach to learn distributed sentence representations that are applicable to multiple purposes. The model is learned by using a convolutional neural network as an encoder to map an input sentence into a continuous vector, and using a long short-term memory recurrent neural network as a decoder. Several tasks are considered, including sentence reconstruction and future sentence prediction. Further, a hierarchical encoder-decoder model is proposed to encode a sentence to predict …
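To make the architecture in the abstract concrete, below is a minimal PyTorch sketch of a CNN encoder that maps a token sequence to a fixed-length vector and an LSTM decoder initialised from that vector to generate a target sentence. The class names, dimensions, single filter width, and max-over-time pooling are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    # Sketch: embed tokens, convolve over time, max-pool to a sentence vector.
    def __init__(self, vocab_size, emb_dim=300, num_filters=600, filter_width=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size=filter_width, padding=1)

    def forward(self, tokens):                   # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)   # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x))             # (batch, num_filters, seq_len)
        return h.max(dim=2).values               # (batch, num_filters)

class LSTMDecoder(nn.Module):
    # Sketch: the sentence vector seeds the LSTM state; hidden_dim must match
    # the encoder's num_filters in this simplified version.
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=600):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, sent_vec, target_tokens):
        h0 = sent_vec.unsqueeze(0)               # (1, batch, hidden_dim)
        c0 = torch.zeros_like(h0)
        emb = self.embed(target_tokens)          # (batch, tgt_len, emb_dim)
        out, _ = self.lstm(emb, (h0, c0))
        return self.out(out)                     # (batch, tgt_len, vocab_size)

Under this reading, the decoder's target_tokens would be the input sentence itself for the reconstruction objective, or the tokens of the following sentence for future-sentence prediction, trained with a standard cross-entropy loss over the predicted vocabulary distribution.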


Cited by 78 publications (61 citation statements)
References 43 publications
“…Note that the latest available skip-thoughts implementation returns an error on the IMDB task. 2,4,5,6 (Arora et al., 2018a; Hill et al., 2016; Gan et al., 2017; Logeswaran and Lee, 2018) Best results from publication. Table 4: Performance of document embeddings built using à la carte n-gram vectors and recent unsupervised word-level approaches on classification tasks, with the character LSTM of (Radford et al., 2017) shown for comparison.…”
Section: N-gram Embeddings For Classification (mentioning, confidence: 99%)
“…Recently, much effort has also been directed towards learning representations for larger pieces of text, with methods ranging from clever compositions of word embeddings (Mitchell and Lapata, 2008; De Boom et al., 2016; Arora et al., 2017; Wieting et al., 2016; Wieting and Gimpel, 2018; Zhelezniak et al., 2019) to sophisticated neural architectures (Le and Mikolov, 2014; Kiros et al., 2015; Conneau et al., 2017; Gan et al., 2017; Tang et al., 2017; Zhelezniak et al., 2018; Subramanian et al., 2018; Pagliardini et al., 2018; Cer et al., 2018).…”
Section: Introduction (mentioning, confidence: 99%)
“…Unsupervised combined models. The results of the individual models (Gan et al., 2017) are not promising. To get better performance, they train two separate models on the same corpus and then combine the latent representations together.…”
Section: Evaluation Results (mentioning, confidence: 94%)
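The combination strategy mentioned in the last excerpt, training two sentence encoders on the same corpus and joining their latent vectors, could look like the following minimal sketch; the concatenation choice, function name, and the two pre-trained encoders are illustrative assumptions rather than the cited papers' actual implementation.

import torch

def combined_representation(sentence_tokens, encoder_a, encoder_b):
    # Hypothetical: encoder_a and encoder_b are two already-trained sentence
    # encoders (e.g. an autoencoder and a future-sentence predictor), each
    # returning a fixed-length vector per sentence.
    with torch.no_grad():
        z_a = encoder_a(sentence_tokens)    # (batch, dim_a)
        z_b = encoder_b(sentence_tokens)    # (batch, dim_b)
    # Combine the latent representations by concatenation; the joint vector
    # is then fed to a downstream classifier or similarity task.
    return torch.cat([z_a, z_b], dim=1)     # (batch, dim_a + dim_b)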