2015
DOI: 10.1007/978-3-319-24246-0_21

Lemmatization for Stronger Reasoning in Large Theories

Abstract: In this work we improve ATP performance in large theories by reusing lemmas derived in previous related problems. Given a large set of related problems to solve, we run automated theorem provers on them, extract a large number of lemmas from the proofs found, and post-process the lemmas to make them usable in the remaining problems. Then we filter the lemmas with several tools, extract their proof dependencies, and use machine learning on such proof dependencies to add the most promising generat…
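As a rough illustration of the workflow the abstract describes, the sketch below loops over related problems, offers the highest-scoring previously derived lemmas as extra axioms, and harvests new lemmas from each proof found. It is a minimal sketch, not the authors' toolchain: run_prover, extract_lemmas, score_lemma, and top_k are hypothetical placeholders standing in for the ATP runs, the lemma post-processing, and the relevance model learned from proof dependencies.

```python
# Minimal sketch of the lemma-reuse loop described in the abstract (not the
# authors' code).  run_prover, extract_lemmas and score_lemma are assumed
# callables supplied by the caller.

def solve_with_lemma_reuse(problems, run_prover, extract_lemmas, score_lemma,
                           top_k=64):
    """problems: iterable of (name, conjecture, axioms) triples.
    run_prover(conjecture, axioms) -> proof or None.
    extract_lemmas(proof) -> list of post-processed lemma formulas.
    score_lemma(lemma, conjecture) -> learned relevance estimate."""
    lemma_pool = []   # lemmas harvested from proofs found so far
    solved = {}
    for name, conjecture, axioms in problems:
        # Offer the top-k most promising previously derived lemmas as axioms.
        ranked = sorted(lemma_pool,
                        key=lambda lem: score_lemma(lem, conjecture),
                        reverse=True)
        proof = run_prover(conjecture, list(axioms) + ranked[:top_k])
        if proof is not None:
            solved[name] = proof
            # Lemmas from this proof become candidates for the remaining problems.
            lemma_pool.extend(extract_lemmas(proof))
    return solved
```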

Cited by 4 publications (2 citation statements). References 22 publications.
“…This and most other approaches (e.g. [15]) exclusively use human selected features. A related problem is that of automatically proposing interesting conjectures and theory exploration.…”
Section: Theorem Proving Problems and Artificial Intelligence
confidence: 99%
“…Several machine learning methods for proof search also re-use clauses. In [8] lemma selection from the derived clauses is used between related problems, while in [3] a similar mechanism, called leapfrogging, is used for E/ENIGMA between repeated runs on the same problem. The latter paper describes the use of a trained graph-based predictor to select a promising subset of the clauses processed during a previous run.…”
Section: Introduction
confidence: 99%
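The "leapfrogging" mechanism quoted above can be pictured roughly as follows: the prover is re-run on the same problem, seeded with a subset of the clauses processed in the previous run, chosen by a trained predictor. The sketch is illustrative only; run_prover, select_clauses, and rounds are hypothetical names, not the cited papers' interfaces.

```python
# Rough sketch of the leapfrogging idea (hypothetical names throughout).

def leapfrog(conjecture, axioms, run_prover, select_clauses, rounds=3):
    """run_prover(conjecture, axioms) -> (proof_or_None, processed_clauses).
    select_clauses(clauses) -> promising subset, e.g. from a trained
    graph-based predictor."""
    seeds = []
    for _ in range(rounds):
        proof, processed = run_prover(conjecture, list(axioms) + seeds)
        if proof is not None:
            return proof
        # Carry forward only the clauses the learned predictor favours.
        seeds = select_clauses(processed)
    return None
```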