2018
DOI: 10.1007/978-3-319-99722-3_39

Semi-supervised Sentiment Annotation of Large Corpora

Cited by 7 publications (3 citation statements)

References 11 publications
“…Afterwards, we applied fine-tuning in the SA, AE, HS, and ID tasks, and we tested on the datasets TweetSentBR (Brum and das Graças Volpe Nunes, 2018), ABSAPT 2022 (da Silva et al., 2022), ToldBR (Leite et al., 2020), and IDPT 2021 (Corrêa et al., 2021). Finally, we analyzed the results obtained in each task.…”
Section: Methods (mentioning)
confidence: 99%
“…The semi-supervised annotation approach has also been applied to other languages, such as Brazilian Portuguese. In [23], the authors extended a small, manually annotated sentiment corpus in order to annotate a large unlabeled corpus. They used a single classifier to predict the classes of the unlabeled documents and added those documents whose confidence value exceeded a predefined threshold.…”
Section: Sentiment Annotation (mentioning)
confidence: 99%
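The single-classifier, threshold-based procedure described in that citation statement is a standard self-training loop. A minimal sketch, assuming scikit-learn and illustrative names (the cited work's actual classifier, features, and threshold are not specified here):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def self_train(labeled_texts, labels, unlabeled_texts,
               threshold=0.9, max_rounds=5):
    """Single-classifier self-training: repeatedly move unlabeled
    documents whose predicted-class probability exceeds `threshold`
    into the labeled pool, then retrain on the enlarged pool."""
    texts, y = list(labeled_texts), list(labels)
    pool = list(unlabeled_texts)
    for _ in range(max_rounds):
        if not pool:
            break
        vec = TfidfVectorizer()
        X = vec.fit_transform(texts)
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        probs = clf.predict_proba(vec.transform(pool))
        conf = probs.max(axis=1)                      # per-document confidence
        preds = clf.classes_[probs.argmax(axis=1)]    # predicted labels
        keep = conf >= threshold
        if not keep.any():                            # nothing confident enough
            break
        texts += [t for t, k in zip(pool, keep) if k]
        y += [p for p, k in zip(preds, keep) if k]
        pool = [t for t, k in zip(pool, keep) if not k]
    return texts, y, pool


# Toy usage: two seed documents per class, two unlabeled documents.
texts, y, pool = self_train(
    ["good great nice", "bad awful poor"],
    ["pos", "neg"],
    ["great good film", "awful bad film"],
    threshold=0.5,
)
```

The threshold trades corpus size against label noise: a high threshold admits fewer but cleaner pseudo-labels, which is why the cited approach only accepts documents above a predefined confidence value.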
“…The best results were obtained with the Naïve Bayes classifier, so we consider these to be the baseline results. Brum and Nunes [9], in a subsequent work, used Naïve Bayes, SVM, Logistic Regression (LR), Multi-Layer Perceptron (MLP), Decision Trees, and Random Forest classifiers. In this case, Multi-Layer Perceptron was the best-performing classifier, as listed in the table.…”
Section: Fine-tuning Results (mentioning)
confidence: 99%