2020 25th International Conference on Pattern Recognition (ICPR) 2021
DOI: 10.1109/icpr48806.2021.9412167
Adversarial Training for Aspect-Based Sentiment Analysis with BERT

Cited by 91 publications (44 citation statements) | References 19 publications
“…A core sub-task in this approach is Aspect-Based Sentiment Analysis: identifying aspect mentions in the text, which may be further classified into high-level aspect categories, and classifying the sentiment towards these mentions. Recent examples are (Ma et al., 2019; Miao et al., 2020; Karimi et al., 2020).…”
Section: Related Work
confidence: 99%
“…In particular, ADAN was trained with labeled source text data in English and unlabeled target text data in Arabic and Chinese. More recently, Karimi et al. [160] fine-tuned the general-purpose BERT and a domain-specific post-trained BERT using adversarial training, which showed promising results in TSA.…”
Section: 12
confidence: 99%
“…They used AMDA to defend against attacks from PPWS [164] and TextFooler [139] on the SST-2, AG News, and IMDB datasets, achieving significant robustness gains in both Targeted Attack Evaluation (TAE) and Static Attack Evaluation (SAE). For the large pre-trained model BERT, Karimi et al. [171] introduced a method named BAT to fine-tune BERT on normal and adversarial text simultaneously, yielding a model with better robustness and generalization ability. Their experiments indicated that a BERT model trained with BAT was more robust than the standard BERT model on the aspect-based sentiment analysis task.…”
Section: Model Robustness Enhancement
confidence: 99%
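The excerpt above describes BAT as fine-tuning BERT on clean and adversarial examples at the same time. A common way to generate such adversarial examples for text models is to perturb the input embeddings along the loss gradient (an FGM-style perturbation), then sum the clean and adversarial losses. The sketch below illustrates only that idea on a toy quadratic loss with numpy; the function name `fgm_perturb`, the epsilon value, and the toy loss are illustrative assumptions, not the actual BAT implementation from the paper.

```python
import numpy as np

def fgm_perturb(grad, epsilon=1.0):
    # Fast Gradient Method: step of size epsilon along the normalized
    # gradient of the loss w.r.t. the (embedding) input.
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return epsilon * grad / norm

# Toy differentiable "model": loss(x) = 0.5 * ||w - x||^2,
# whose gradient w.r.t. x is (x - w). Stands in for BERT's loss
# as a function of the input embeddings.
w = np.array([1.0, 2.0])

def loss(x):
    return 0.5 * np.sum((w - x) ** 2)

x = np.array([0.0, 0.0])        # clean "embedding"
grad = x - w                    # dL/dx at the clean input
x_adv = x + fgm_perturb(grad, epsilon=0.1)  # adversarial "embedding"

# BAT-style objective: train on the clean and adversarial inputs together.
combined = loss(x) + loss(x_adv)
```

Because the perturbation follows the gradient (ascent direction), `loss(x_adv)` is at least as large as `loss(x)`, so the combined objective penalizes the model's sensitivity around each training example, which is the robustness effect the cited experiments report.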