2016 · Preprint
DOI: 10.48550/arxiv.1611.01423
Learning Continuous Semantic Representations of Symbolic Expressions

Cited by 5 publications (10 citation statements) · References 0 publications
“…Related work: Data-driven models have proven successful in various entailment recognition tasks (Baroni et al., 2012; Socher et al., 2012; Rocktäschel et al., 2014; Bowman et al., 2015b; Rocktäschel et al., 2015). The data sets used in research on this topic tend to be either fully formal, focusing on logic instead of natural language (Allamanis et al., 2016), or fully natural, as is the case for manually annotated data sets of English sentence pairs such as SICK (Marelli et al., 2014) or SNLI (Bowman et al., 2015a). Moreover, entailment recognition models are often endowed with functionality reflecting pre-established linguistic or semantic regularities of the data (Bankova et al., 2016; Serafini and Garcez, 2016; Sadrzadeh et al., 2018).…”
Section: Introduction and Related Work
“…Second, their dataset consists of matrix expressions containing at most one variable, while our formulas contain many variables. Allamanis et al (2016) use a recursive neural network to learn whether two expressions are equivalent. They tested on two datasets: propositional logic and polynomials.…”
Section: Results
“…The fourth and fifth encoding benchmarks are (tree) recursive neural networks (Tai et al., 2015; Le & Zuidema, 2015; Zhu et al., 2015; Allamanis et al., 2016), also known as TreeRNNs. These recursively encode the logical expression using the parse structure, where leaf nodes of the tree (propositional variables) are embedded as learnable vectors, and each logical operator then combines one or more of these embedded values to produce a new embedding.…”
Section: Encoder Benchmarks
“…There are various papers with datasets with a discrete reasoning nature. Kaiser & Sutskever (2015) use an adapted convolutional architecture to solve addition and multiplication with good generalization; Allamanis et al. (2016) and Evans et al. (2018) use tree networks to predict polynomial or logical equivalence or logical entailment; Selsam et al. (2018) use message passing networks with a bipartite graph structure to decide satisfiability in formulas in conjunctive normal form, and so on. The difference between those problems and the dataset in this paper is that the former all have a single well-defined input structure that can be easily mapped into narrow architectures suited to the problem structure, avoiding the need for general reasoning skills like parsing or generic working memory.…”
Section: Related Work