2019
DOI: 10.48550/arxiv.1909.13375
Preprint

A Simple and Effective Model for Answering Multi-span Questions

Cited by 3 publications
(4 citation statements)
References 0 publications
“…In the question answering task, the question does not require labels. (Segal et al., 2019) also treats the multi-answer question problem as a sequence tagging problem, and only the answer is tagged with labels. Huggingface notes that if a token has the label -100, it will not be considered in the entropy loss calculation.…”
Section: Related Work
confidence: 99%
“…Huggingface notes that if a token has the label -100, it will not be considered in the entropy loss calculation. Therefore, the experiment uses -100 to label sentence 1, and uses IO tagging, which performs better than BIO for multi-answer questions (Segal et al., 2019), to label sentence 2.…”
Section: Related Work
confidence: 99%
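The -100 convention these excerpts refer to is PyTorch's default `ignore_index` for `nn.CrossEntropyLoss`: tokens labeled -100 contribute nothing to the loss, which is why the citing work masks the question (sentence 1) this way. A minimal sketch — the tensor shapes and labels are illustrative, not taken from the cited papers:

```python
import torch
import torch.nn as nn

# Token-level logits for 5 tokens over 2 tag classes (I, O).
logits = torch.randn(5, 2)
# Label the question (sentence 1) tokens with -100 so they are
# excluded from the loss; IO-tag the answer tokens in sentence 2.
labels = torch.tensor([-100, -100, 1, 1, 0])

# ignore_index=-100 is the PyTorch default, written out here for clarity.
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)
loss = loss_fn(logits, labels)

# Equivalent: compute the loss only over the non-ignored tokens.
mask = labels != -100
manual = nn.CrossEntropyLoss()(logits[mask], labels[mask])
assert torch.allclose(loss, manual)
```

Because the ignored positions are dropped before averaging, the result is identical to computing the loss over only the answer-sentence tokens.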
“…For extractive reading comprehension such as SQuAD (Rajpurkar et al., 2016), the answer is a span in the text, and the MRC model obtains the answer by predicting the probability that each word is the start or end of the span. Some datasets such as DROP (Dua et al., 2019) have answers that include multiple spans, which can be obtained by using BIO tagging (Segal et al., 2019). For multiple-choice reading comprehension, where the answer is one of several options, one method (Pan et al., 2019) is to calculate a score for each option and then select the option with the highest score.…”
Section: Machine Reading Comprehension
confidence: 99%
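The BIO tagging approach mentioned above turns multi-span answer extraction into per-token classification: a decoder then recovers the answer spans from the tag sequence. A minimal sketch of such a decoder (the function name and tag encoding are illustrative, not from the cited papers):

```python
# Decode multiple answer spans from per-token BIO tags, as used for
# multi-span answers in DROP-style datasets (Segal et al., 2019).
# "B" begins a span, "I" continues the current span, "O" is outside.
def bio_to_spans(tags):
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":
            if start is not None:      # a new span closes the previous one
                spans.append((start, i))
            start = i
        elif tag == "O":
            if start is not None:
                spans.append((start, i))
                start = None
        # "I" extends the open span (a stray "I" with no "B" is ignored).
    if start is not None:              # close a span that runs to the end
        spans.append((start, len(tags)))
    return spans

tags = ["O", "B", "I", "O", "B", "B", "I"]
print(bio_to_spans(tags))  # → [(1, 3), (4, 5), (5, 7)]
```

An IO scheme, which the second excerpt reports working better here, simply drops the "B"/"I" distinction, so adjacent spans merge — acceptable when answers are rarely adjacent.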
“…Transformer-based architectures have also produced state-of-the-art performance on sequence tagging tasks like Named Entity Recognition (NER) (Yamada et al., 2020; Devlin et al., 2019), span extraction (Eberts and Ulges, 2019; Joshi et al., 2020), and QA tasks (Devlin et al., 2019; Lan et al., 2020). Multiple span extraction from texts has been explored both as a sequence tagging task (Patil et al., 2020; Segal et al., 2019) and as span extraction as in RC tasks (Hu et al., 2019).…”
Section: Literature
confidence: 99%