Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) 2016
DOI: 10.18653/v1/p16-2022

Natural Language Inference by Tree-Based Convolution and Heuristic Matching

Abstract: In this paper, we propose the TBCNN-pair model to recognize entailment and contradiction between two sentences. In our model, a tree-based convolutional neural network (TBCNN) captures sentence-level semantics; then heuristic matching layers like concatenation and element-wise product/difference combine the information in the individual sentences. Experimental results show that our model outperforms existing sentence encoding-based approaches by a large margin.
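The heuristic matching described in the abstract can be sketched as follows: given two fixed-size sentence embeddings, the matching vector is the concatenation of both embeddings with their element-wise product and difference. This is a minimal illustrative sketch; the function name and the toy vectors are ours, not from the paper, and the real model feeds this vector into a softmax classifier.

```python
import numpy as np

def heuristic_match(h1, h2):
    # Combine two sentence embeddings into one matching vector using the
    # heuristics named in the abstract: concatenation, element-wise
    # product, and element-wise difference. (Hypothetical helper name.)
    return np.concatenate([h1, h2, h1 * h2, h1 - h2])

h1 = np.array([0.5, -1.0, 2.0])  # toy premise embedding
h2 = np.array([1.0,  0.5, 2.0])  # toy hypothesis embedding
m = heuristic_match(h1, h2)      # length = 4 x embedding dimension
```

The product term captures dimension-wise agreement and the difference term captures directional mismatch, which is why this combination is more informative than concatenation alone.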

Cited by 309 publications (225 citation statements)
References 22 publications
“…If we remove word-level embedding, the accuracies drop to 65.6% and 66.0%. If we re…

Table 2: Accuracies of the models on SNLI.
Model                                 Test
LSTM (Bowman et al., 2015)            80.6
GRU (Vendrov et al., 2015)            81.4
Tree CNN (Mou et al., 2016)           82.1
SPINN-PI (Bowman et al., 2016)        83.2
NTI (Munkhdalai and Yu, 2016b)        83.4
Intra-Att BiLSTM (Liu et al., 2016)   84.2
Self-Att BiLSTM (Lin et al., 2017)    84.2
NSE (Munkhdalai and Yu, 2016a)        84.6
Gated-Att BiLSTM                      85.5
…”
Section: Results (mentioning)
confidence: 99%
“…However, in this paper we principally concentrate on sentence encoder-based models. Many researchers have studied sentence encoder-based models for natural language inference (Bowman et al., 2015; Vendrov et al., 2015; Mou et al., 2016; Bowman et al., 2016; Munkhdalai and Yu, 2016a,b; Liu et al., 2016; Lin et al., 2017). It is, however, not very clear whether the potential of sentence encoder-based models has been well exploited.…”
Section: Related Work (mentioning)
confidence: 99%