2023
DOI: 10.3390/app13063915
RoBERTa-GRU: A Hybrid Deep Learning Model for Enhanced Sentiment Analysis

Abstract: This paper proposes a novel hybrid model for sentiment analysis. The model leverages the strengths of both the Transformer model, represented by the Robustly Optimized BERT Pretraining Approach (RoBERTa), and the Recurrent Neural Network, represented by Gated Recurrent Units (GRU). The RoBERTa model provides the capability to project the texts into a discriminative embedding space through its attention mechanism, while the GRU model captures the long-range dependencies of the embedding and addresses the vanish…
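The gated recurrence the abstract alludes to can be sketched in a few lines. The following is an illustrative toy GRU cell in NumPy, not the paper's implementation: the actual RoBERTa-GRU model feeds trained RoBERTa embeddings into a trained GRU layer, whereas the dimensions and weights below are arbitrary placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, p):
    """One GRU step: gates decide how much of the old state to keep."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = p
    z = sigmoid(x @ Wz + h @ Uz + bz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)              # reset gate
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh + bh)   # candidate state
    return (1.0 - z) * h + z * h_cand              # interpolate old/new state

rng = np.random.default_rng(0)
d_in, d_h = 8, 4  # toy embedding and hidden sizes
params = []
for _ in range(3):  # one (W, U, b) triple each for z, r, and the candidate
    params += [0.1 * rng.standard_normal((d_in, d_h)),
               0.1 * rng.standard_normal((d_h, d_h)),
               np.zeros(d_h)]

h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):  # a sequence of 5 "embeddings"
    h = gru_cell(x, h, params)
print(h.shape)  # (4,)
```

Because each new state is a gated interpolation between the previous state and a bounded candidate, gradients flow through the additive path rather than repeated squashing, which is how the GRU mitigates the vanishing-gradient problem over long sequences.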

Cited by 30 publications
(9 citation statements)
References 19 publications
“…Among the ML techniques are SVM (Rahat et al., 2019; Steinke et al., 2022), DT (Steinke et al., 2022), and KNN (Rahat et al., 2019). LSTM (Jang et al., 2020; Umer et al., 2021), CNN-LSTM (Jang et al., 2020; Umer et al., 2021), RoBERTa-LSTM (Tan et al., 2022), and RoBERTa-GRU (Tan et al., 2023) are a few examples of deep learning techniques. Table 4 compares all approaches with the suggested RoBERTa-(CNN-LSTM) model using the Twitter dataset.…”
Section: Results
confidence: 99%
“…Among the ML techniques are SVM (Rahat et al., 2019; Steinke et al., 2022), DT (Steinke et al., 2022), and KNN (Rahat et al., 2019). LSTM (Jang et al., 2020; Umer et al., 2021), CNN-LSTM (Jang et al., 2020; Umer et al., 2021), RoBERTa-LSTM (Tan et al., 2022), and RoBERTa-GRU (Tan et al., 2023). Table 5 contrasts the suggested model with other techniques on the IMDB reviews dataset. The values of F1-measure, precision, accuracy, and recall are the highest for our proposed model.…”
Section: Results
confidence: 99%
“…Contemporary research is gravitating towards hybrid models that combine deep learning's depth in text analysis with machine learning's efficiency in numerical data interpretation, aiming for superior classification accuracy [6], [7]. Our study contributes to this trajectory by proposing a multi-class detection framework that unites the contextually aware BERT and RoBERTa models [8], [9], [10] with BiLSTM's sequential data handling [11], [12] and LightGBM's ensemble strengths. A thorough literature review was performed, and several other classification studies [13], [14], [15] were analyzed.…”
Section: Literature Review
confidence: 99%
“…The dataset was then split into training, validation, and testing sets. The preprocessed dataset was then converted into a format compatible with the RoBERTa model by concatenating the texts and adding special tokens [30]. The typical RoBERTa input format is: single sequence: <s> X </s>; pair of sequences: <s> A </s></s> B </s>.…”
Section: Data Preparation
confidence: 99%
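The special-token layout quoted above (single sequence `<s> X </s>`, sequence pair `<s> A </s></s> B </s>`) can be made concrete with two small string helpers. This is a hand-rolled sketch for illustration only; in practice a library tokenizer such as Hugging Face's `RobertaTokenizer` inserts these tokens automatically.

```python
# Minimal sketch of the RoBERTa input formats described in the quote above.
# BOS/EOS are RoBERTa's <s> and </s> special tokens.
BOS, EOS = "<s>", "</s>"

def format_single(text: str) -> str:
    """Single sequence: <s> X </s>"""
    return f"{BOS} {text} {EOS}"

def format_pair(a: str, b: str) -> str:
    """Pair of sequences: <s> A </s></s> B </s>"""
    return f"{BOS} {a} {EOS}{EOS} {b} {EOS}"

print(format_single("great movie"))
# <s> great movie </s>
print(format_pair("great movie", "terrible plot"))
# <s> great movie </s></s> terrible plot </s>
```

Note the doubled `</s></s>` separator between the two sequences of a pair: RoBERTa uses it where BERT would use a single `[SEP]` token.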