2021
DOI: 10.1007/s11227-021-03652-4
Entity-level sentiment prediction in Danmaku video interaction

Cited by 10 publications (8 citation statements)
References 32 publications
“…Deep learning-based methods have stronger feature-learning capabilities, reducing the cost of building and selecting features [37], but they require large amounts of data and are prone to data sparsity and overfitting on small datasets [38]. Mainstream pre-trained text models use different subword segmentation methods: BERT and XLNet use WordPiece, while RoBERTa uses Byte-Pair Encoding. When a Chinese word-segmentation tool is applied first, the tokenizer often produces the same output for the segmented corpus as for the unsegmented one, so the word boundaries carry no information through tokenization; semantic information is lost and model performance degrades. RQ4: When the Chinese word-segmentation method is inconsistent with the pre-trained model's tokenizer, how can the word-level information produced by Chinese word segmentation be extracted effectively?…”
Section: Deep Learning-based Danmaku Sentiment Analysis
confidence: 99%
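The mismatch described above can be illustrated with a minimal sketch. This is not the cited paper's code; `char_tokenize` is a hypothetical stand-in for a character-level Chinese tokenizer (as used by Chinese BERT variants), which discards whitespace and emits one token per character, so a corpus pre-segmented by a word-segmentation tool yields exactly the same token sequence as the raw text:

```python
def char_tokenize(text: str) -> list[str]:
    """Minimal stand-in for a character-level Chinese tokenizer:
    whitespace is discarded and each remaining character becomes a token."""
    return [ch for ch in text if not ch.isspace()]

raw = "弹幕视频情感分析"           # unsegmented text
segmented = "弹幕 视频 情感 分析"  # output of a word-segmentation tool

# Both inputs tokenize to the identical character sequence, so the
# word boundaries inserted by the segmentation tool are lost.
assert char_tokenize(raw) == char_tokenize(segmented)
```

This is the degradation the citing authors point to: any semantic signal encoded in the word boundaries never reaches the model, which motivates their RQ4 on recovering word-level information after tokenization.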
“…Deep learning-based methods have stronger feature-learning capabilities, reducing the cost of building and selecting features 36 , but they require large amounts of data and are prone to data sparsity and overfitting on small datasets 37 . Mainstream pre-trained text models use different subword segmentation methods: BERT and XLNet use WordPiece, while RoBERTa uses Byte-Pair Encoding. When a Chinese word-segmentation tool is applied first, the tokenizer often produces the same output for the segmented corpus as for the unsegmented one, so semantic information is lost and model performance degrades.…”
Section: Related Work
confidence: 99%
“…Wang et al. proposed an improved Bi-LSTM-based model to identify the emotions of danmaku messages [11]. In addition, a framework for entity-level sentiment analysis of danmaku video comments was proposed in [12].…”
Section: B Sentiment Analysis
confidence: 99%