2020
DOI: 10.1007/978-3-030-46133-1_22

Augmenting Semantic Representation of Depressive Language: From Forums to Microblogs

Cited by 8 publications (7 citation statements)
References 31 publications
“…While the authors have used the CNN model to learn high-quality features, their method does not consider temporal dynamics coupled with latent topics, which we show to play a crucial role in overall quantitative performance. Farruque et al. [16] study the problem of creating word embeddings in cases where the data is scarce, for instance, depressive language detection from user tweets. The underlying motivation of their work is to simulate a retrofitting-based word embedding approach [17] where they begin with a pre-trained model and fine-tune the model on domain-specific data.…”
Section: User-level Behaviours (mentioning)
confidence: 99%
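The retrofitting approach referenced as [17] in the statement above is the lexicon-driven post-processing of Faruqui et al. The sketch below is a minimal numpy rendering of that classic update rule, not the fine-tuning procedure actually used in the cited paper; the function and variable names are illustrative only.

```python
import numpy as np

def retrofit(pretrained, lexicon, iterations=10, alpha=1.0):
    """Classic retrofitting update: pull each word vector toward the
    average of its lexicon neighbours while staying close to the
    original pre-trained vector.

    pretrained : dict word -> np.ndarray (kept fixed)
    lexicon    : dict word -> list of semantically related words
    """
    vecs = {w: v.copy() for w, v in pretrained.items()}
    for _ in range(iterations):
        for word, neighbours in lexicon.items():
            nbrs = [n for n in neighbours if n in vecs]
            if word not in vecs or not nbrs:
                continue
            beta = 1.0 / len(nbrs)  # uniform edge weights over neighbours
            nbr_part = beta * np.sum([vecs[n] for n in nbrs], axis=0)
            # closed-form update: weighted mean of the original vector
            # and the current neighbour vectors
            vecs[word] = (alpha * pretrained[word] + nbr_part) / (alpha + beta * len(nbrs))
    return vecs
```

Ten iterations with uniform edge weights are typically enough for this update to converge.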
“…Word-Embedding-Family (WEF): We use several classic word embedding models, including Google News (Google), Twitter Glove (Glove), Twitter Skip-gram Embedding (TE) [10], Depression Specific Embedding (DSE) trained on depression-specific corpora [9], Depression Embedding Augmented Twitter Embedding (ATE) [9], NLI pre-trained Roberta Embedding (Roberta-NLI) [15] and Universal Sentence Encoder Embedding (USE) [4]. All these embeddings except DSE have been trained on millions of tokens.…”
Section: Representation of Tweets and Labels for ZSL (mentioning)
confidence: 99%
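Of the embeddings listed in this statement, only the generic baselines are publicly packaged. The sketch below shows how they might be loaded, assuming the gensim-data releases "word2vec-google-news-300" and "glove-twitter-200" correspond to the Google and Glove models named in the quote; TE, DSE and ATE are corpus-specific and only noted in comments.

```python
import gensim.downloader as api

# Generic baselines from the gensim-data catalogue (assumed to match the
# "Google" and "Glove" embeddings named in the quoted statement).
google = api.load("word2vec-google-news-300")  # Google News word2vec, 300-d
glove = api.load("glove-twitter-200")          # GloVe trained on Twitter, 200-d

print(google.most_similar("sad", topn=5))
print(glove.most_similar("sad", topn=5))

# TE (Twitter Skip-gram), DSE (depression-specific) and ATE (augmented
# Twitter) are trained on task-specific corpora in the cited work and are
# not available under standard gensim-data names.
```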
“…Word Vector Mapper Models (WV-MAPPER): As originally proposed in [9], we learn a least-squares projection matrix, M_w, between the word vectors of the common vocabulary V of both source and target embeddings. This learned matrix is then used to adjust the word vectors of the source embedding, which are later used to build the WV-AVG sentence representation as outlined in Equation 1.…”
Section: Representation of Tweets and Labels for ZSL (mentioning)
confidence: 99%
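Assuming Equation 1 in the quoted paper is a plain average of word vectors (the usual reading of WV-AVG), the projection-and-average step can be sketched in numpy as below; the lstsq solver and the function names are illustrative choices rather than the exact implementation of [9].

```python
import numpy as np

def learn_projection(src, tgt, shared_vocab):
    """Fit a least-squares projection matrix M_w that maps source word
    vectors onto target word vectors over the common vocabulary V."""
    S = np.vstack([src[w] for w in shared_vocab])  # |V| x d_src
    T = np.vstack([tgt[w] for w in shared_vocab])  # |V| x d_tgt
    M_w, *_ = np.linalg.lstsq(S, T, rcond=None)    # argmin ||S M - T||^2
    return M_w

def wv_avg(tokens, src, M_w):
    """Sentence representation: average of the projected source word
    vectors (assumed to correspond to WV-AVG in Equation 1)."""
    projected = [src[w] @ M_w for w in tokens if w in src]
    return np.mean(projected, axis=0) if projected else None
```

Because the matrix is fit only on the shared vocabulary, source words outside that overlap can still be mapped into the target space when they occur in tweets.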
“…Farruque et al. [13] study the problem of creating word embeddings in cases where the data is scarce, for instance, depressive language detection from user tweets. The underlying motivation of their work is to simulate a retrofitting-based word embedding approach [14] where they begin with a pre-trained model and fine-tune the model on domain-specific data.…”
Section: Word Embeddings for Depression Detection (mentioning)
confidence: 99%