2022
DOI: 10.48550/arxiv.2206.14729
Preprint

longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks

Abstract: Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability. Here, we describe the approach of the team "longhorns" on Task 1 of the First Workshop on Dynamic Adversarial Data Collection (DADC), which asked teams to manually fool a model on an Extractive Question Answering task. Our team finished first, with a model error rate of 62%. We advocate for a systematic, linguistically informed approach to formulating adversarial questions…
