Improving Word Representations with Document Labels (2017)
DOI: 10.1109/taslp.2017.2658019

Cited by 13 publications (10 citation statements)
References 19 publications
“…By simply extending the Word2Vec structures, their objective function included a loss term corresponding to the global context. In addition, several fine-tuned models have been proposed that integrate additional information, such as sentiment information [39], character information [22,39], document labels [40,41], and syntactic information [42], on top of the original pre-trained vectors. In recent years, new models have also been proposed, such as ELMo (Embeddings from Language Models) [43] and BERT (Bidirectional Encoder Representations from Transformers) [44].…”
Section: Word Embedding Models
confidence: 99%
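The idea of integrating document labels into the embedding objective [40,41], which is also the subject of the cited paper, can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration under stated assumptions, not the authors' published implementation: the class name JointSkipGram, the weight lambda_label, and the single-context-word simplification are all hypothetical. Each target word embedding is trained to predict both a context word (the Skip-Gram term) and the label of the document it occurs in.

```python
# Minimal sketch (assumed names, not the paper's released code) of a
# Skip-Gram-style objective extended with a document-label term: each
# target word embedding predicts a context word AND its document's label.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointSkipGram(nn.Module):  # hypothetical name
    def __init__(self, vocab_size, num_labels, dim=100):
        super().__init__()
        self.in_embed = nn.Embedding(vocab_size, dim)  # target-word vectors
        self.ctx_head = nn.Linear(dim, vocab_size)     # context-word softmax
        self.label_head = nn.Linear(dim, num_labels)   # document-label softmax

    def forward(self, target, context, label, lambda_label=0.5):
        v = self.in_embed(target)                      # (batch, dim)
        loss_ctx = F.cross_entropy(self.ctx_head(v), context)  # Skip-Gram term
        loss_lab = F.cross_entropy(self.label_head(v), label)  # label term
        return loss_ctx + lambda_label * loss_lab      # weighted joint loss
```

In practice the full-vocabulary cross-entropy would typically be replaced by negative sampling, as in standard Word2Vec training; only the extra label head differs from plain Skip-Gram.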
“…These models focus on Twitter sentiment classification and predict or rank sentiment polarity based on word embeddings in a fixed window of words across a sentence. Based on the Skip-Gram model, Zhang et al. (2015) proposed a model for word-level and sentence-level sentiment analysis, and Yang et al. (2017) proposed a model that predicted the target word and its label simultaneously. Both of them took sentiment information as a part of the local context.…”
Section: Sentiment-specific Word Embedding
confidence: 99%
“…These models predict or rank sentiment polarity based on word embeddings in a fixed window of words across a sentence. In addition, based on the Skip-Gram model (Mikolov et al. 2013), Zhang et al. (2015) integrated sentiment information by using the semantic word embeddings in the context to predict sentiment polarity through a softmax layer, and Yang et al. (2017) proposed a model that predicted the target word and its label simultaneously. Both of them took sentiment information as a part of the local context.…”
Section: Introduction
confidence: 99%
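The phrase "took sentiment information as a part of the local context" suggests a CBOW-style reading, sketched below under stated assumptions: the label is embedded like an extra context word and combined with the window average before predicting the target word. The name LabelAsContextCBOW and the simple averaging scheme are hypothetical, not the published architecture of Zhang et al. (2015) or Yang et al. (2017).

```python
# Minimal sketch, under assumptions, of treating the sentiment label as
# part of the local context: a CBOW-style window average is combined with
# a label embedding to predict the target word. Names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAsContextCBOW(nn.Module):  # hypothetical name
    def __init__(self, vocab_size, num_labels, dim=100):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, dim)   # context-word vectors
        self.label_embed = nn.Embedding(num_labels, dim)  # label "pseudo-word"
        self.out = nn.Linear(dim, vocab_size)             # target-word softmax

    def forward(self, context, label, target):
        # context: (batch, window) word ids; label: (batch,) label ids
        ctx = self.word_embed(context).mean(dim=1)        # average the window
        h = (ctx + self.label_embed(label)) / 2           # label joins context
        return F.cross_entropy(self.out(h), target)       # predict target word
```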
“…Recently, learning distributed representations of words, phrases, and sentences has gained a lot of attention due to its applicability and superior performance over bag-of-words (BOW) features in a wide range of text processing tasks [6, 14, 25, 35–37]. These models can be categorized into two groups: (i) task-agnostic or unsupervised models, and (ii) task-specific or supervised models.…”
Section: Related Work
confidence: 99%