Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP 2017
DOI: 10.18653/v1/w17-5301

The RepEval 2017 Shared Task: Multi-Genre Natural Language Inference with Sentence Representations

Abstract: This paper presents the results of the RepEval 2017 Shared Task, which evaluated neural network sentence representation learning models on the Multi-Genre Natural Language Inference corpus (MultiNLI) recently introduced by Williams et al. (2017). All five participating teams beat the bidirectional LSTM (BiLSTM) and continuous bag of words baselines reported in Williams et al. (2017). The best single model used stacked BiLSTMs with residual connections to extract sentence features and reached 74.5% accuracy on t…
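The abstract names the winning single model's architecture but gives no further detail here. As a rough illustration only, the PyTorch sketch below shows one plausible reading of "stacked BiLSTMs with residual connections" as a sentence encoder; the layer count, dimensions, and max-pooling readout are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ResidualBiLSTMEncoder(nn.Module):
    """Sentence encoder: stacked BiLSTM layers with residual connections.

    A minimal sketch of the architecture named in the abstract; the
    layer count, hidden size, and max-pooling readout are assumptions.
    """

    def __init__(self, embed_dim=300, hidden_dim=150, num_layers=3):
        super().__init__()
        # Residual addition requires the BiLSTM output (2 * hidden_dim)
        # to match the layer's input width.
        assert 2 * hidden_dim == embed_dim, "residual add needs matching dims"
        self.layers = nn.ModuleList(
            nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
            for _ in range(num_layers)
        )

    def forward(self, embedded):        # (batch, seq_len, embed_dim)
        x = embedded
        for lstm in self.layers:
            out, _ = lstm(x)            # (batch, seq_len, 2 * hidden_dim)
            x = x + out                 # residual connection across the layer
        return x.max(dim=1).values      # max-pool over time -> sentence vector
```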

Cited by 67 publications (62 citation statements)
References 15 publications

“…These examples are included in the distributed corpus, but are marked with '-' in the gold label field and should not be used in standard evaluations (Nangia et al., 2017).…”
Section: Data Collection
confidence: 99%
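For concreteness, dropping these unlabeled examples from a MultiNLI-style JSONL file looks like the sketch below. The gold_label field name matches the distributed corpus; the loader itself is illustrative.

```python
import json

def load_nli(path):
    """Load a MultiNLI-style JSONL file, dropping examples whose gold
    label is '-' (no annotator consensus), per the note quoted above."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            ex = json.loads(line)
            if ex["gold_label"] != "-":   # excluded from standard evaluations
                examples.append(ex)
    return examples
```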
“…The baseline model we use here was introduced by Williams et al. (2017) alongside the publication of the MultiNLI corpus. It has a five-layer structure, which is shown in Figure 1.…”
Section: BiLSTM Baseline
confidence: 99%
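The cited Figure 1 is not reproduced here. As a sketch under common conventions for this kind of NLI baseline (a shared BiLSTM encoder for premise and hypothesis, matching features, and an MLP classifier), the code below illustrates the pipeline; the specific layer layout, the [u; v; |u − v|; u ∗ v] feature combination, and all sizes are assumptions rather than details from the citing paper.

```python
import torch
import torch.nn as nn

class BiLSTMBaseline(nn.Module):
    """Sketch of a BiLSTM NLI baseline in the style of Williams et al.
    (2017). Matching features and MLP sizes are assumed conventions."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(8 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def encode(self, tokens):           # (batch, seq_len) token ids
        out, _ = self.encoder(self.embed(tokens))
        return out.mean(dim=1)          # average states -> (batch, 2 * hidden)

    def forward(self, premise, hypothesis):
        u, v = self.encode(premise), self.encode(hypothesis)
        feats = torch.cat([u, v, (u - v).abs(), u * v], dim=-1)
        return self.classifier(feats)   # entailment/neutral/contradiction logits
```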
“…We evaluated our approach on the Multi-Genre NLI (MNLI) corpus as part of the shared task for the RepEval 2017 workshop (Nangia et al., 2017). We train our CIAN model on a mixture of the MNLI and SNLI corpora, using the full MNLI training set and a randomly selected 20 percent of the SNLI training set at each epoch.…”
Section: Data
confidence: 99%
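The per-epoch mixing rule quoted above is simple enough to state in code. A minimal sketch, assuming sampling without replacement and a fresh SNLI sample each epoch (the quote specifies neither):

```python
import random

def epoch_training_set(mnli_train, snli_train, snli_fraction=0.2, seed=None):
    """Build one epoch's training data per the quoted recipe: the full
    MNLI training set plus a random 20% sample of the SNLI training set."""
    rng = random.Random(seed)
    k = int(len(snli_train) * snli_fraction)
    mixed = list(mnli_train) + rng.sample(list(snli_train), k)
    rng.shuffle(mixed)                  # interleave the two corpora
    return mixed
```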