Findings of the Association for Computational Linguistics: EMNLP 2021 2021
DOI: 10.18653/v1/2021.findings-emnlp.331
Neural Unification for Logic Reasoning over Natural Language

Abstract: Automated Theorem Proving (ATP) deals with the development of computer programs able to show that some conjectures (queries) are a logical consequence of a set of axioms (facts and rules). There exist several successful ATPs where conjectures and axioms are formally provided (e.g., formalised as First Order Logic formulas). Recent approaches, such as , have proposed transformer-based architectures for deriving conjectures given axioms expressed in natural language (English). The conjecture is verified th…

Cited by 5 publications (7 citation statements)
References 18 publications
“…The model outputs a single reasoning step per call. Each generated step is concatenated to the past input, and the model again generates the next step (i.e., ProofWriter style) (Liang et al., 2021; Sanyal et al., 2022; Picco et al., 2021; Tafjord et al., 2021; Shwartz et al., 2020). This process is iterated until the model outputs the answer or until a set maximum number of iterations is reached (100).…”
Section: Step-by-step
Mentioning confidence: 99%
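The iterative loop described in this statement can be sketched as follows. This is a minimal illustration, not the cited systems' actual code: `generate_step` is a hypothetical placeholder for one call to a trained model, and the `ANSWER:` marker is an assumed convention for detecting a final answer.

```python
def iterative_reasoning(facts_and_rules, generate_step, max_iters=100):
    """Proofwriter-style loop: generate one reasoning step per model call,
    concatenate it to the past input, and repeat until the model emits an
    answer or the iteration budget (here 100, as in the statement) runs out.

    generate_step: hypothetical stand-in for a single model call
    (context string in, one generated step out).
    """
    context = facts_and_rules
    for _ in range(max_iters):
        step = generate_step(context)
        if step.startswith("ANSWER:"):   # assumed answer marker
            return step
        context = context + "\n" + step  # append the step to the past input
    return None                          # no answer within the budget
```

As the statement notes, each call sees all previously generated steps, so the context grows with every iteration.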
“…Note that this strategy typically derives a long reasoning chain; from an engineering perspective, this strategy is inefficient. Backward chaining: The model starts from the equation for the target variable and backtracks over the dependent equations until it reaches a known value (Picco et al., 2021; Rocktäschel and Riedel, 2017; Cingillioglu and Russo, 2019). Then, it solves each equation in order by inserting known or calculated values until the target one is reached.…”
Section: Exhaustive Chaining
Mentioning confidence: 99%
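The backward-chaining strategy this statement describes can be sketched as follows, under an assumed representation where each equation is a callable plus a list of the variables it depends on. This is an illustrative sketch of the general strategy, not the cited models' implementation (which performs the chaining with a neural model rather than explicit recursion).

```python
def backward_chain(target, equations, known):
    """Backward chaining over equations, as described in the statement:
    start from the target variable, backtrack over dependent equations
    until known values are reached, then solve each equation in order.

    equations: var -> (fn, [dependency_vars])  (assumed representation)
    known:     var -> value
    """
    order = []  # equations to solve, innermost dependencies first

    def visit(var):
        if var in known:
            return  # reached a known value; stop backtracking
        fn, deps = equations[var]
        for dep in deps:
            visit(dep)
        if var not in order:
            order.append(var)

    visit(target)  # backtracking pass: collect the needed equations

    values = dict(known)
    for var in order:  # forward pass: insert known/calculated values
        fn, deps = equations[var]
        values[var] = fn(*(values[d] for d in deps))
    return values[target]
```

Only equations actually needed for the target are visited, which is the efficiency advantage over the exhaustive chaining criticized in the first sentence.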
“…Our system is a kind of general model program (Dohan et al., 2022), especially those with verification models (Cobbe et al., 2021), which use language models inside as probabilistic programs and apply disparate inference algorithms to the models. Other kinds of approaches to use LMs for reasoning include training discriminative models (Picco et al., 2021; Ghosal et al., 2022; Zhang et al., 2023), prompting GPT-3 with a spelled-out reasoning procedure (Wei et al., 2022; Talmor et al., 2020), and distilling GPT-3.5 (Fu et al., 2023).…”
Section: Related Work
Mentioning confidence: 99%