Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics 2021
DOI: 10.18653/v1/2021.starsem-1.30
Adversarial Training for Machine Reading Comprehension with Virtual Embeddings

Abstract: Adversarial training (AT) as a regularization method has proved its effectiveness on various tasks. Though there are successful applications of AT on some NLP tasks, the distinguishing characteristics of NLP tasks have not been exploited. In this paper, we aim to apply AT to machine reading comprehension (MRC) tasks. Furthermore, we adapt AT for MRC by proposing a novel adversarial training method called PQAT that perturbs the embedding matrix instead of word vectors. To differentiate the roles of passag…
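The excerpt does not give PQAT's exact update rule, but embedding-level adversarial training of the kind the abstract describes is commonly implemented as a gradient-direction perturbation of the embedding parameters, scaled to a fixed norm. The sketch below illustrates that generic idea with numpy; the function name, the epsilon value, and the random matrices are illustrative assumptions, not the paper's code.

```python
import numpy as np

def fgm_perturbation(grad, epsilon=1.0):
    """FGM-style perturbation: step in the direction of the loss gradient,
    rescaled so its L2 norm equals epsilon. With embedding-level AT, `grad`
    would be the gradient of the task loss w.r.t. the embedding matrix."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return epsilon * grad / norm

# Hypothetical embedding matrix (vocab_size x dim) and a stand-in gradient.
rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 4))
grad = rng.normal(size=(10, 4))

delta = fgm_perturbation(grad, epsilon=0.5)
adv_emb = emb + delta  # perturbed embeddings used for the adversarial forward pass
print(round(float(np.linalg.norm(delta)), 3))  # norm equals epsilon (0.5)
```

In a full training loop, the model would be run once on `emb` to obtain `grad`, once on `adv_emb` to compute the adversarial loss, and both losses would be backpropagated; PQAT's specific contribution (perturbing the embedding matrix rather than per-token word vectors, with separate passage/question treatment) is described in the paper itself.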


Cited by 5 publications (4 citation statements). References 16 publications.
“…They introduced two algorithms called AddSent and AddAny. Later, Yang et al (2021) improved these algorithms by introducing AddSentDivers to increase the diversity of the generated adversarial sentences.…”
Section: Adversarial Sentences in QA Systems
confidence: 99%
“…Wang and Jiang [11] combined general knowledge with neural networks through data augmentation. Yang et al [12,13] used adversarial training to maximize the adversarial loss by adding perturbations in the embedding layer. In addition, some studies have attempted to change the process of model inference.…”
Section: Adversarial Attacks in Machine Reading Comprehension Models
confidence: 99%
“…To further illustrate the advantages of our algorithm, we choose the following eleven methods for comparison: QAInfoMax [16], MAARS [9], R.M-Reader [27], KAR [11], BERT+Adv [12], ALUM [13], Sub-part Alignment [14], BERT+DGAdv [8], BERT+PR [15], HKAUP [28], and PQAT [13]. These eleven methods are used to improve the robustness of MRC models.…”
Section: Algorithm Comparison
confidence: 99%
“…Model performance has also been shown to improve when applying Virtual Adversarial Training (author?) on SQuAD1.1 [12], SQuAD2.0 [13], and RACE [14]. Given the benefits of AT, we decided to apply several training strategies that can boost model performance across MRC tasks, which are discussed further in Section 2.3 and Section 2.4.…”
Section: Introduction
confidence: 99%