2021
DOI: 10.1109/access.2021.3118537

Sentiment Analysis of Review Text Based on BiGRU-Attention and Hybrid CNN

Abstract: Convolutional neural networks (CNN), recurrent neural networks (RNN), attention mechanisms, and their variants are extensively applied in sentiment analysis, and fusion models are expected to perform better. However, fusion models face problems such as complicated structure, excessive trainable parameters, and long training time. The classification performance of traditional models that use cross-entropy as the loss function is undesirable, since sample categories are imbalanced and samples vary in ease and difficulty…

Cited by 22 publications (6 citation statements)
References 33 publications
“…They experimented with both randomized and pre-trained word vectors for generating word vectors in the embedding layer. Zhu et al. [30] used a similar attention-based architecture in which word embeddings were passed through a BiGRU layer to capture long-distance contextual semantics. Self-attention is then applied to the BiGRU layer's output to compute word similarity across the sentence, subsequently emphasizing words with strong emotion.…”
Section: Related Work
confidence: 99%
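As a rough illustration of the mechanism described in that citation statement — self-attention applied over BiGRU outputs to score word-vs-word similarity across the sentence — here is a toy NumPy sketch. It is not the paper's implementation; all shapes, values, and the choice of plain scaled dot-product attention are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H):
    """Scaled dot-product self-attention over BiGRU outputs H (seq_len, dim).

    Each position is scored against every other position; positions whose
    hidden states resemble many others receive larger attention weights,
    which is how emotionally salient words can be emphasized.
    """
    scores = H @ H.T / np.sqrt(H.shape[1])   # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ H                       # attention-weighted representations

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))   # toy stand-in: 6 tokens, 8-dim BiGRU outputs
out = self_attention(H)
print(out.shape)              # (6, 8)
```

In the paper's setting, `H` would be the concatenated forward/backward hidden states of the BiGRU rather than random values, and the attention would typically carry learned projection matrices.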
“…(6) BiGRU-Att-HCNN [19]: uses an attention mechanism to fuse the information extracted by the BiGRU and HCNN models, enriching the semantic and feature information of the sentence.…”
Section: Benchmark Experiments
confidence: 99%
“…When the convolution kernel window size is set to 5, the prediction accuracy is the highest. Figure 2(b) shows the prediction accuracy for different combinations of convolution kernels of different sizes; when the combination of window sizes is (3, 4, 5), the capsule network obtains higher prediction accuracy.…”
Section: Ablation Experiments and Analysis
confidence: 99%
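The multi-window-size convolution described above is the standard TextCNN-style trick: run one 1-D convolution per window size over the token sequence, max-pool each, and concatenate. A minimal NumPy sketch, assuming toy averaging filters and invented dimensions (the cited work's actual filters are learned):

```python
import numpy as np

def conv1d_valid(X, k):
    """Naive valid-mode 1-D convolution over the token axis.

    X: (seq_len, dim) token embeddings; k: window size.
    Uses a single toy averaging filter in place of learned weights.
    """
    seq_len, dim = X.shape
    windows = np.stack([X[i:i + k].ravel() for i in range(seq_len - k + 1)])
    W = np.ones(k * dim) / (k * dim)        # toy filter (simple averaging)
    return windows @ W                       # (seq_len - k + 1,) feature map

def multi_kernel_features(X, sizes=(3, 4, 5)):
    """Max-pool one feature map per window size and concatenate."""
    return np.array([conv1d_valid(X, k).max() for k in sizes])

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 4))     # toy stand-in: 10 tokens, 4-dim embeddings
feats = multi_kernel_features(X)
print(feats.shape)               # (3,)
```

With the (3, 4, 5) combination the model sees tri-gram, 4-gram, and 5-gram patterns simultaneously, which matches the ablation finding that mixing window sizes beats any single size.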
“…Yang et al. [3] combined the BERT model with the capsule network, proposing an enhanced capsule network that accurately reflects a movie's real word-of-mouth from user comments in sentiment analysis of social-media reviews. The BiGRU model consists of two GRU models stacked in opposite directions [4]; a single GRU model can obtain only the unidirectional preceding or following context of the text [5]. In this paper, BiGRU is used instead of GRU to better capture the bidirectional semantic dependence of the text.…”
confidence: 99%
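The BiGRU construction in that last statement — a forward GRU and a backward GRU whose per-position hidden states are concatenated — can be sketched from scratch in NumPy. This is a minimal illustration with randomly initialized toy weights, not the cited model; dimensions and parameter shapes are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, Wz, Wr, Wh):
    """One GRU cell step; each W acts on the concatenated [h, x] vector."""
    hx = np.concatenate([h, x])
    z = sigmoid(Wz @ hx)                              # update gate
    r = sigmoid(Wr @ hx)                              # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r * h, x]))  # candidate state
    return (1 - z) * h + z * h_tilde

def gru_run(X, params, hidden):
    """Run a GRU over the sequence X, returning all hidden states."""
    h, out = np.zeros(hidden), []
    for x in X:
        h = gru_step(h, x, *params)
        out.append(h)
    return np.stack(out)

def bigru(X, params_f, params_b, hidden):
    """BiGRU: forward GRU over X plus backward GRU over reversed X,
    concatenated per position so each token sees left and right context."""
    fwd = gru_run(X, params_f, hidden)
    bwd = gru_run(X[::-1], params_b, hidden)[::-1]
    return np.concatenate([fwd, bwd], axis=1)         # (seq_len, 2 * hidden)

rng = np.random.default_rng(2)
dim, hidden, seq = 4, 5, 7

def make_params():
    return tuple(rng.normal(scale=0.1, size=(hidden, hidden + dim))
                 for _ in range(3))

X = rng.normal(size=(seq, dim))
H = bigru(X, make_params(), make_params(), hidden)
print(H.shape)    # (7, 10)
```

The doubled output width (2 × hidden) is exactly why each position carries both the preceding ("above") and following ("below") context that a single unidirectional GRU misses.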