Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), 2017.
DOI: 10.18653/v1/S17-2139
SSN_MLRG1 at SemEval-2017 Task 5: Fine-Grained Sentiment Analysis Using Multiple Kernel Gaussian Process Regression Model

Abstract: The system developed by the SSN_MLRG1 team for SemEval-2017 Task 5 on fine-grained sentiment analysis uses a Multiple Kernel Gaussian Process for identifying the optimistic and pessimistic sentiments associated with companies and stocks. Since comments on the same companies and stocks may express different emotions depending on time, their properties, such as smoothness and periodicity, may vary. Our experiments show that while a single-kernel Gaussian Process can learn some properties well, a Multiple Kernel Gaussian Process…
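The abstract describes combining kernels so that a Gaussian Process regressor can capture several signal properties (e.g. smoothness and periodicity) at once. Below is a minimal, hedged sketch of that idea using scikit-learn; the TF-IDF features, the specific kernel mix (RBF plus periodic plus white noise), and all hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Sketch of multiple-kernel Gaussian Process regression for fine-grained
# sentiment scoring. Features, kernels, and data are assumptions for
# illustration only; the paper's exact setup is not reproduced here.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

# Toy training data: financial texts with gold sentiment scores in [-1, 1].
texts = [
    "Stock surges after strong earnings report",
    "Shares tumble on weak guidance",
    "Company announces dividend increase",
    "Regulator opens probe into accounting practices",
]
scores = np.array([0.8, -0.7, 0.5, -0.6])

vectorizer = TfidfVectorizer(max_features=500)
X = vectorizer.fit_transform(texts).toarray()

# Multiple kernel: an RBF term for smooth variation, a periodic term for
# recurring patterns, and a white-noise term for observation noise,
# combined additively.
kernel = (
    RBF(length_scale=1.0)
    + ExpSineSquared(length_scale=1.0, periodicity=3.0)
    + WhiteKernel(noise_level=0.1)
)

gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0)
gpr.fit(X, scores)

test = vectorizer.transform(["Profit warning hits investor confidence"]).toarray()
mean, std = gpr.predict(test, return_std=True)
print(f"predicted sentiment: {mean[0]:+.3f} (std {std[0]:.3f})")
```

A single-kernel model would use only one of these terms; summing kernels lets the GP fit data whose covariance structure mixes several such properties, which is the motivation the abstract gives for the multiple-kernel approach.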

Cited by 5 publications (1 citation statement); references 8 publications.
“…Cosine Similarity: Jiang et al. [26] 0.778; Ghosal et al. [27] 0.751; Deborah et al. [63] 0… Our analysis reveals that all BERT-based models surpass the SemEval-2017 top performers, demonstrating the evolving effectiveness of BERT-based architectures in financial text sentiment analysis. Notably, RoBERTa, upon which our SDA is based, shows superior results due to its optimized training strategy and the absence of a next-sentence prediction objective, enhancing its relevance for tasks focused purely on sentiment analysis [61].…”
Section: Models
Confidence: 79%