Proceedings of ACL 2018, System Demonstrations
DOI: 10.18653/v1/p18-4005
Jack the Reader – A Machine Reading Framework

Abstract: Many Machine Reading and Natural Language Understanding tasks require reading supporting text in order to answer questions. For example, in Question Answering, the supporting text can be newswire or Wikipedia articles; in Natural Language Inference, premises can be seen as the supporting text and hypotheses as questions. Providing a set of useful primitives operating in a single framework of related tasks would allow for expressive modelling, and easier model comparison and replication. To that end, we present …

Cited by 7 publications (6 citation statements) · References 18 publications
“…We also experimented with the Decomposable Attention Model (DAM) (Parikh et al., 2016), as used in the baseline model; however, ESIM consistently performed better. The Jack the Reader (Weissenborn et al., 2018) framework was used for both DAM and ESIM. We first pre-trained the ESIM model on the Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015), and then fine-tuned on the FEVER dataset.…”
Section: Natural Language Inference (Nli)mentioning
confidence: 99%
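The two-stage recipe this citation describes (pre-train on a large related corpus, then fine-tune on the target task) can be sketched generically. The toy data, logistic-regression model, and training loop below are hypothetical stand-ins chosen for illustration; they are not the Jack the Reader API or the actual ESIM/SNLI/FEVER setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, X, y, lr=0.5, epochs=200):
    """Plain logistic-regression gradient descent; stands in for any trainer."""
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)

# Hypothetical "SNLI-like" pre-training data: large, related labeling task.
X_pre = rng.normal(size=(500, 4))
y_pre = (X_pre[:, 0] + X_pre[:, 1] > 0).astype(float)

# Hypothetical "FEVER-like" target data: small, with a shifted label rule.
X_tgt = rng.normal(size=(40, 4))
y_tgt = (X_tgt[:, 0] + 0.5 * X_tgt[:, 2] > 0).astype(float)

w0 = np.zeros(4)
w_pre = train(w0, X_pre, y_pre)                # stage 1: pre-train
w_ft = train(w_pre, X_tgt, y_tgt, epochs=50)   # stage 2: fine-tune from pre-trained weights

acc = ((sigmoid(X_tgt @ w_ft) > 0.5) == y_tgt).mean()
print(f"fine-tuned accuracy on target data: {acc:.2f}")
```

The key design point is that stage 2 starts from the stage-1 weights rather than from scratch, which is what makes a small target dataset usable.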
“…Strictly speaking, almost any NLP task can be formulated as question answering, and this is already being leveraged for model reuse and multi-task learning [e.g. 168,271] and zero-shot learning [e.g. 1,143].…”
Section: Task Versus Formatmentioning
confidence: 99%
“…Following Welbl et al. (2017), we use two neural QA models, namely BIDAF (Seo et al., 2016a) and FASTQA (Weissenborn et al., 2017b), as baselines for the considered WIKIHOP predicates. We use the implementation provided by the JACK QA framework (Weissenborn et al., 2018) with the same hyperparameters as used by Welbl et al. (2017), and train a separate model for each predicate. To ensure that the performance of the baseline is not adversely affected by the relatively small number of training examples, we also evaluate the BIDAF model trained on the whole WIKIHOP corpus.…”
Section: Baselinesmentioning
confidence: 99%
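The "separate model per predicate" setup mentioned in this citation can be illustrated with a minimal sketch. The predicate names and the majority-class "model" below are illustrative assumptions only, not WIKIHOP data or the JACK framework's trained readers:

```python
from collections import Counter

# Hypothetical (predicate -> answers) training examples, not real WIKIHOP data.
train_data = {
    "country_of_origin": ["france", "france", "japan"],
    "record_label": ["emi", "sony", "sony", "sony"],
}

# Train one model per predicate; here each "model" is just the
# majority-class answer, standing in for a separately trained QA reader.
models = {pred: Counter(answers).most_common(1)[0][0]
          for pred, answers in train_data.items()}

print(models["record_label"])  # prints the majority answer: sony
```

At prediction time, the predicate of a query selects which model answers it, so each model only ever sees examples of its own relation.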