2020
DOI: 10.48550/arxiv.2009.03673
Preprint

kk2018 at SemEval-2020 Task 9: Adversarial Training for Code-Mixing Sentiment Classification

Abstract: Code switching is a linguistic phenomenon that may occur within a multilingual setting where speakers share more than one language. With increasing communication between groups speaking different languages, this phenomenon is becoming more and more common. However, there is little research and data in this area, especially for code-mixing sentiment classification. In this work, domain transfer learning from the state-of-the-art uni-language model ERNIE is tested on the code-mixing dataset, and surprisingly, a strong b…
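
The abstract names adversarial training on top of a transfer-learned transformer as the core technique. As a rough illustration of how such training is commonly implemented, below is a minimal sketch of FGM-style embedding perturbation during fine-tuning: each batch gets a clean pass plus a second pass on gradient-perturbed word embeddings. The checkpoint name, label count, and epsilon are illustrative assumptions, not the authors' reported configuration (the paper itself builds on ERNIE and a multilingual model).

```python
import torch
from transformers import AutoModelForSequenceClassification

# Assumed checkpoint and label count for illustration only.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def fgm_perturb(embedding, epsilon=1.0):
    """Shift embedding weights one step along the loss gradient (FGM)."""
    grad = embedding.weight.grad
    norm = torch.norm(grad)
    if norm == 0 or torch.isnan(norm):
        return None
    delta = epsilon * grad / norm
    embedding.weight.data.add_(delta)
    return delta

def train_step(batch):
    """One adversarial training step; batch holds input_ids, attention_mask, labels."""
    emb = model.get_input_embeddings()
    model(**batch).loss.backward()       # clean forward/backward pass
    delta = fgm_perturb(emb)
    if delta is not None:
        model(**batch).loss.backward()   # adversarial pass; gradients accumulate
        emb.weight.data.sub_(delta)      # restore the clean embeddings
    optimizer.step()
    optimizer.zero_grad()
```

Accumulating the clean and adversarial gradients before the optimizer step is the usual design choice here: it regularizes the model against small input perturbations without doubling the number of optimizer updates.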

Cited by 1 publication (1 citation statement, 2022) | References 15 publications

“…A novel and interesting approach, BERT-Attack, was also proposed [6], where a pre-trained BERT model was used to effectively attack fine-tuned BERT models as well as traditional LSTM-based deep learning models. Liu et al. [7] applied transfer learning with the ERNIE framework, along with adversarial training using a multilingual model. Tan et al. [20] proposed two strong black-box adversarial attack frameworks, one word-level and another phrase-level, the latter being particularly effective on XNLI.…”
Section: Related Work
confidence: 99%
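
For context on the word-level black-box attacks the citing work mentions, here is a minimal sketch of the generic idea: greedily replace words with candidate substitutes, keeping any swap that lowers the victim model's confidence in the true label. The `predict_proba` callable and the candidate dictionary are hypothetical stand-ins, not the actual interfaces of BERT-Attack or the frameworks of Tan et al.

```python
from typing import Callable, Dict, List

def greedy_word_attack(
    tokens: List[str],
    label: int,
    predict_proba: Callable[[List[str]], List[float]],  # black-box victim model
    candidates: Dict[str, List[str]],  # word -> substitute words (assumed given)
) -> List[str]:
    """Greedy word-level substitution attack against a query-only classifier."""
    best = list(tokens)
    best_score = predict_proba(best)[label]
    for i, word in enumerate(tokens):
        for sub in candidates.get(word, []):
            trial = best[:i] + [sub] + best[i + 1:]
            score = predict_proba(trial)[label]
            if score < best_score:  # keep the swap that hurts the model most
                best, best_score = trial, score
    return best
```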