2020
DOI: 10.1007/978-3-030-39442-4_65
Emoji Prediction: A Transfer Learning Approach

Cited by 6 publications (2 citation statements); references 10 publications.
“…Using the Stanford Sentiment Treebank collection (SST-5) [32] as the training corpus, they showed that SentiBERT outperforms other neural models (Tree-LSTMs, GCN, RNNs) on a Twitter collection. In [38], they developed an approach similar to SentiBERT, but trained on the collections provided by the SemEval-2018 Task 1 organisers, and their approach outperformed existing baselines (such as SeerNet [5], SVM [20], PlusEmo2Vec [25], etc.). These results suggest that the BERT-based regression model, SentiBERT, provides a state-of-the-art supervised and transfer learning approach.…”
Section: Supervised Approaches (mentioning)
confidence: 99%
“…The second supervised method we employed was the BERT-based method, SentiBERT [37, 38], which has been shown to give state-of-the-art performance on one of the test collections (SE18-Vreg). SentiBERT is built on the HuggingFace library, and the model parameters are initialized using the pre-trained BERT-base model.…”
Section: Upperbounds: Supervised Approaches (mentioning)
confidence: 99%
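
Both quoted statements describe the same setup: a BERT-based regression model built on the HuggingFace library, with weights initialized from the pre-trained BERT-base model. As an illustration only (the cited papers' code is not reproduced here), a minimal sketch of such a model in HuggingFace Transformers might look as follows; the `bert-base-uncased` checkpoint name, the example sentence, and the 0.9 intensity target are hypothetical placeholders, not details from the cited papers.

```python
# Minimal sketch: a BERT-base regression model in HuggingFace Transformers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels=1 attaches a single-output head on the pooled representation;
# problem_type="regression" makes the library use MSE loss when labels are given.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=1,
    problem_type="regression",
)

# One fine-tuning step on a toy (text, intensity) pair; real training would
# iterate over a collection such as SemEval-2018 Task 1 V-reg with an optimizer.
batch = tokenizer(
    ["I am thrilled about this!"],   # hypothetical example sentence
    return_tensors="pt", padding=True, truncation=True,
)
labels = torch.tensor([[0.9]])       # hypothetical intensity target in [0, 1]

model.train()
out = model(**batch, labels=labels)  # out.loss is the MSE regression loss
out.loss.backward()                  # illustrative backward pass only

# At inference time, the scalar logit is the predicted intensity.
model.eval()
with torch.no_grad():
    score = model(**batch).logits.item()
print(f"predicted intensity: {score:.3f}")
```

This sketch only mirrors the regression framing the statements attribute to SentiBERT (pre-trained BERT-base encoder plus a scalar output head trained on intensity scores); the actual SentiBERT architecture adds sentiment-composition components not shown here.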