2017
DOI: 10.48550/arxiv.1707.03904
Preprint

Quasar: Datasets for Question Answering by Search and Reading

Citation types: 0 supporting, 77 mentioning, 0 contrasting
Citing publications: 2018–2024

Cited by 46 publications (77 citation statements)
References 0 publications
“…The encoder uses self-attention to conditionally encode the context with the query, and the decoder allows conditional generation of outputs that are not necessarily present in the input. To scale question answering to reason over large knowledge sources such as Wikipedia, task formulations typically retrieve text spans from a corpus to condition answer generation (Chen et al., 2017; Dhingra et al., 2017). However, several challenges encountered in NLDBs preclude direct application of these techniques. Scale: to scale neural reasoning to databases of non-trivial size, it would not be feasible to encode the entire database as input to the transformer.…”
Section: Challenges
confidence: 99%
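The retrieve-then-read pattern this excerpt describes is easy to sketch. The snippet below is a minimal illustration, not code from the cited papers: the overlap scorer, the `reader` stub, and the token budget are assumptions made for the example.

```python
# Minimal retrieve-then-read sketch: score passages against the query,
# keep only as many top passages as fit the encoder's input budget, and
# hand the concatenated context to a reader. All names are illustrative.

def overlap_score(query: str, passage: str) -> int:
    """Crude relevance score: number of shared lowercase tokens."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve_then_read(query, corpus, reader, budget_tokens=512):
    # Rank passages by overlap with the query (a stand-in for TF-IDF/BM25).
    ranked = sorted(corpus, key=lambda p: overlap_score(query, p), reverse=True)
    # Greedily pack passages until the (assumed) encoder budget is full,
    # since encoding the entire corpus in one input is infeasible.
    context, used = [], 0
    for passage in ranked:
        n = len(passage.split())
        if used + n > budget_tokens:
            break
        context.append(passage)
        used += n
    return reader(query, " ".join(context))

# Usage with a dummy reader that just echoes the best-ranked sentence:
corpus = [
    "Quasar-T contains trivia questions paired with web text.",
    "The capital of France is Paris.",
]
answer = retrieve_then_read(
    "What does Quasar-T contain?", corpus,
    reader=lambda q, ctx: ctx.split(".")[0],
)
print(answer)
```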
“…Machines have surpassed human performance on the well-known SQuAD task (Rajpurkar et al., 2016), where models extract answer spans from a short passage of text. The subsequent body of work has further considered incorporating retrieval from large corpora such as Wikipedia (Dhingra et al., 2017; Joshi et al., 2017; Kwiatkowski et al., 2019) to identify relevant information, conditioning answer generation (Chen…”
Section: Introduction
confidence: 99%
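The span extraction mentioned here is the SQuAD-style reading step: a model scores every token as a candidate answer start and end, and the best valid span is returned. The function below is a generic sketch over assumed, precomputed scores; it is not taken from any of the cited systems.

```python
def best_span(start_scores, end_scores, max_answer_len=15):
    """Pick (start, end) maximizing start_scores[s] + end_scores[e],
    subject to s <= e < s + max_answer_len, as in SQuAD-style readers."""
    best, best_score = (0, 0), float("-inf")
    for s, s_score in enumerate(start_scores):
        for e in range(s, min(s + max_answer_len, len(end_scores))):
            if s_score + end_scores[e] > best_score:
                best_score, best = s_score + end_scores[e], (s, e)
    return best

# Toy scores over a 5-token passage: the best span is tokens 2..3.
print(best_span([0.1, 0.2, 2.0, 0.1, 0.1], [0.1, 0.1, 0.3, 1.5, 0.2]))
```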
“…Previous open-book QA methods first filter a large corpus down to a small set of relevant documents using information retrieval (Karpukhin et al., 2020; Robertson and Zaragoza, 2009). The document set then provides context for answering questions (Dhingra et al., 2017; Dunn et al., 2017; Joshi et al., 2017; Nguyen et al., 2016). Conversely, closed-book QA requires models to answer using only their implicit knowledge.…”
Section: Related Work
confidence: 99%
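The information-retrieval filter cited here (Robertson and Zaragoza, 2009) is the BM25 ranking function. Below is a compact, self-contained BM25 sketch using the common defaults k1 = 1.5 and b = 0.75; the whitespace tokenizer and toy corpus are simplifications for illustration.

```python
import math
from collections import Counter

def bm25_rank(query, docs, k1=1.5, b=0.75):
    """Rank document indices by BM25 score against the query."""
    toks = [d.lower().split() for d in docs]   # naive whitespace tokenizer
    n = len(docs)
    avgdl = sum(len(t) for t in toks) / n      # average document length
    df = Counter()                             # document frequencies
    for t in toks:
        df.update(set(t))

    def score(doc):
        tf = Counter(doc)
        total = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
            total += idf * tf[term] * (k1 + 1) / norm
        return total

    return sorted(range(n), key=lambda i: -score(toks[i]))

docs = ["quasar is a question answering dataset",
        "paris is the capital of france",
        "open domain question answering uses retrieval"]
print(bm25_rank("question answering dataset", docs))  # doc 0 ranks first
```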
“…For the RC regime they use DrQA's document reader [7], while for open QA they use the PSPR model [27]. They experiment with different datasets (SQuAD [49] for RC and Quasar-T [10] for open QA) for fine-tuning the models, as well as BioBERT [26] embeddings, to gain insight into the effect of context length on this task.…”
Section: Task 7b
confidence: 99%
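As a hedged sketch of the setup this excerpt describes, the snippet below runs an extractive reader with BioBERT weights via Hugging Face transformers. The model id is a placeholder: in practice one would load a checkpoint already fine-tuned on SQuAD or Quasar-T, since otherwise the QA head is randomly initialized. Varying `max_length` is one simple way to probe the effect of context length.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Placeholder checkpoint: substitute a reader fine-tuned on SQuAD/Quasar-T.
MODEL_ID = "dmis-lab/biobert-base-cased-v1.1"

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_ID)

question = "What does Quasar-T contain?"
context = "Quasar-T consists of 43,000 open-domain trivia questions."

# Truncation caps the context, the knob whose effect the excerpt studies.
inputs = tok(question, context, return_tensors="pt",
             truncation=True, max_length=384)
with torch.no_grad():
    out = model(**inputs)

# Decode the span between the highest-scoring start and end positions.
start = out.start_logits.argmax()
end = out.end_logits.argmax()
print(tok.decode(inputs["input_ids"][0][start:end + 1]))
```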