2020
DOI: 10.3390/s21010133
An Effective BERT-Based Pipeline for Twitter Sentiment Analysis: A Case Study in Italian

Abstract: Over the last decade, industrial and academic communities have increased their focus on sentiment analysis techniques, especially applied to tweets. State-of-the-art results have recently been achieved using language models trained from scratch on corpora made up exclusively of tweets, in order to better handle the Twitter jargon. This work aims to introduce a different approach for Twitter sentiment analysis based on two steps. Firstly, the tweet jargon, including emojis and emoticons, is transformed into plai…
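The first step of the pipeline described above can be sketched as a normalization pass that rewrites Twitter jargon as plain text before BERT tokenization. This is a minimal illustrative sketch only: the emoticon mapping and the hashtag/mention handling below are assumptions, not the authors' actual lexicon or preprocessing rules.

```python
# Sketch of step one: rewrite Twitter jargon (emoticons, mentions,
# hashtags) as plain text, so a BERT model pre-trained on standard
# text can process the tweet. The mapping table is illustrative only.
import re

EMOTICON_MAP = {
    ":)": "happy face",
    ":(": "sad face",
    ":D": "laughing face",
    "<3": "heart",
}

def normalize_tweet(text: str) -> str:
    """Replace emoticons with words, drop mentions, unpack hashtags."""
    for emoticon, words in EMOTICON_MAP.items():
        text = text.replace(emoticon, f" {words} ")
    text = re.sub(r"@\w+", "", text)          # drop user mentions
    text = re.sub(r"#(\w+)", r"\1", text)     # keep the hashtag word
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace

print(normalize_tweet("Bella giornata :) #sole @amico"))
# → Bella giornata happy face sole
```

The normalized text would then be fed to a BERT model pre-trained on plain text for fine-tuning and classification, as the abstract describes.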

Cited by 104 publications (43 citation statements)
References 69 publications
“…Pota et al. [49] introduced the BERT language model for the task of Twitter sentiment analysis: they first transformed the Twitter jargon, including emojis and emoticons, into plain text, and then applied BERT, which was pre-trained on plain text, to fine-tune and classify the tweets. Their results show improvements in sentiment classification performance, both with respect to other state-of-the-art systems and with respect to the use of the BERT classification model alone.…”
Section: BERT-Based Language Models
confidence: 99%
“…A challenge researchers face is securing the investment in the large-scale infrastructure needed to train these models. Many BERT-based SA experimental studies, case studies, and review papers have been presented, covering Arabic aspect-based SA [6], Italian Twitter SA [7], and Bangla-English machine translation [8].…”
Section: Related Work
confidence: 99%
“…The authors proposed several NN models, including BERT-LSTM, on two setups (a 10-class setup and a 3-class setup, the compact version of the dataset), with results showing BERT-LSTM to be the best for the 3-class setup with an average F-score of 82.37%, albeit with a very high training time. Others include the work of [11], who examined several NN models along with BERT on a movie review dataset, with results indicating that BERT produced the best accuracy, while [13] used BERT for Twitter sentiment analysis, transforming jargon into plain text for BERT training. A summary of the studies using deep learning algorithms to predict sentiment from user reviews is given in Table 1.…”
Section: Deep Learning Approaches To Reviews
confidence: 99%
“…Though sentiment analysis is often performed using classical machine learning approaches, deep learning has gained momentum in recent years, showing promising results [6,10]. Further, scholars have explored various word embedding techniques, from the popular Word2Vec and its variants to more advanced, state-of-the-art transformer-based pre-trained models such as Bidirectional Encoder Representations from Transformers (BERT) [10][11][12][13], which have displayed much better results in text classification. Nevertheless, as shown later in Sect.…”
Section: Introduction
confidence: 99%