Dataset and Neural Recurrent Sequence Labeling Model for Open-Domain Factoid Question Answering
Preprint, 2016
DOI: 10.48550/arxiv.1607.06275

Cited by 38 publications (36 citation statements); references 11 publications.
“…The answer extraction task is intended to extract the answer word from the answer sentence (RQ3). These tasks are crucial in the study of machine reading comprehension [31,51,55]. Note that we aim to demonstrate the effectiveness and interpretability of EEG signals as implicit feedback.…”
Section: Experiments and Discussion (mentioning)
confidence: 99%
“…Closed-book question answering (QA) tasks, including WebQA [38]. We follow the same closed-book setting in GPT-3 [1], where the models are not allowed to access any external knowledge when answering open-domain factoid questions about broad factual knowledge.…”
Section: Task Description (mentioning)
confidence: 99%
“…When answer type is multi-span, ms represents the sequence labels of this answer, otherwise null. We adopt the B, I, O scheme to indicate multi-span answers (Li et al., 2016) in which ms = (n_1, . .…”
Section: MRC Model (mentioning)
confidence: 99%
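The excerpt above references the B, I, O (BIO) labeling scheme for marking multi-span answers. A minimal sketch of how such tags are assigned is shown below; the function name and the token/span representation are illustrative assumptions, not taken from the cited paper.

```python
# Minimal BIO tagging sketch for multi-span answers.
# Assumptions (not from the cited paper): answers are given as
# (start, end) token-index pairs with `end` exclusive.

def bio_tags(tokens, answer_spans):
    """Label each token: B begins an answer span, I continues it, O otherwise."""
    tags = ["O"] * len(tokens)
    for start, end in answer_spans:
        tags[start] = "B"                 # first token of the span
        for i in range(start + 1, end):   # remaining tokens of the span
            tags[i] = "I"
    return tags

tokens = ["Paris", "and", "Lyon", "are", "in", "France"]
# A multi-span answer covering "Paris" and "Lyon":
print(bio_tags(tokens, [(0, 1), (2, 3)]))
# → ['B', 'O', 'B', 'O', 'O', 'O']
```

Because each answer span restarts with a B tag, the scheme distinguishes two adjacent answer spans from one longer span, which is what makes it suitable for multi-span answers.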