2021
DOI: 10.48550/arxiv.2104.08735
Preprint

Learning with Instance Bundles for Reading Comprehension

Abstract: When training most modern reading comprehension models, all the questions associated with a context are treated as being independent from each other. However, closely related questions and their corresponding answers are not independent, and leveraging these relationships could provide a strong supervision signal to a model. Drawing on ideas from contrastive estimation, we introduce several new supervision techniques that compare question-answer scores across multiple related instances. Specifically, we normal…
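
As a rough sketch of the idea the abstract describes (normalizing question-answer scores across a bundle of related instances rather than treating each instance independently), the snippet below shows one way such a bundle-level contrastive objective could look in PyTorch. The function name bundle_contrastive_loss, the flat bundle of scored pairings, and the softmax normalization are illustrative assumptions, not the paper's actual implementation.

    # Illustrative sketch only: a bundle-level contrastive loss, assuming a model
    # that already produces a scalar score for every question-answer pairing in a bundle.
    import torch
    import torch.nn.functional as F

    def bundle_contrastive_loss(pair_scores: torch.Tensor, gold_index: int) -> torch.Tensor:
        # pair_scores: shape (bundle_size,), one score per (question, answer) pairing.
        # gold_index: position of the correct pairing within the bundle.
        # Scores are normalized across the whole bundle, so closely related
        # questions and answers compete with one another instead of being
        # treated as independent training instances.
        log_probs = F.log_softmax(pair_scores, dim=0)
        return -log_probs[gold_index]

    # Toy example: two related questions crossed with two candidate answers
    # gives four pairings; assume only the first pairing is correct.
    scores = torch.tensor([2.7, 0.3, -0.5, 1.1])
    loss = bundle_contrastive_loss(scores, gold_index=0)
    print(loss.item())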

Cited by 3 publications (3 citation statements)
References 25 publications
“…Last, we showed that constraint sets are useful for evaluation. Future work can use constraints as a supervision signal, similar to Dua et al. (2021), who leveraged dependencies between training examples to enhance model performance.…”
Section: Discussion (mentioning)
confidence: 99%
“…The aforementioned work built dataset baselines with popular entity-based PLMs, and thus leaves significant performance gaps compared with human evaluation. Asai and Hajishirzi (2020), Dua et al. (2021) and Shang et al. (2021) leverage features of closely related questions to capture temporal differences and deal with certain types of event-centric questions. Compared to the existing works, we target various types of event-centric questions.…”
Section: Related Work (mentioning)
confidence: 99%
“…This has led to an increased interest in creating automatic counterfactual data for evaluating out-of-distribution generalization (Bowman and Dahl, 2021) and for counterfactual data augmentation (Geva et al., 2021; Longpre et al., 2021). Some work focuses on using heuristics like first-order logic (Asai and Hajishirzi, 2020), swapping superlatives and nouns (Dua et al., 2021), or targeting specific data splits (Finegan-Dollak and Verma, 2020). Webster et al. (2020) use templates to create large-scale counterfactual data for pre-training to reduce gender bias.…”
Section: Counterfactual Generation (mentioning)
confidence: 99%