2015
DOI: 10.1007/978-3-662-48899-7_7

FEMaLeCoP: Fairly Efficient Machine Learning Connection Prover

Abstract: FEMaLeCoP is a connection tableau theorem prover based on leanCoP which uses an efficient implementation of internal learning-based guidance for extension steps. Although exhaustive use of such internal guidance can incur a significant slowdown of the raw inferencing process, FEMaLeCoP trained on related proofs can prove many problems that cannot be solved by leanCoP. In particular, on the MPTP2078 benchmark, FEMaLeCoP adds 90 (15.7%) more problems to the 574 problems that are provable by leanCoP.
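The abstract leaves the guidance mechanism implicit; the citing papers below describe it as Naive Bayesian learning over features of previous proofs. The OCaml sketch below only illustrates that general idea, scoring candidate extension clauses by how often they co-occurred with the features of the current goal in earlier proofs. The names (clause_stats, score, rank) and the penalty constants are assumptions made for illustration; this is not FEMaLeCoP's actual code or data format.

(* Minimal sketch of naive Bayes relevance scoring for candidate
   extension clauses, assuming per-clause co-occurrence counts
   collected from earlier proofs.  Names and constants are made up
   for illustration. *)

module StrSet = Set.Make (String)

type clause_stats = {
  uses : float;                              (* proofs that used the clause *)
  feat_counts : (string, float) Hashtbl.t;   (* feature -> co-occurrence count *)
}

(* Log-space score of one candidate clause given the feature set of the
   current goal/path.  Missing and extra features get small penalties. *)
let score stats (goal_feats : StrSet.t) =
  let missing_penalty = -0.3 and extra_penalty = -0.05 in
  let base = log stats.uses in
  let matched =
    StrSet.fold
      (fun f acc ->
        match Hashtbl.find_opt stats.feat_counts f with
        | Some c -> acc +. log (c /. stats.uses)
        | None -> acc +. missing_penalty)
      goal_feats 0.0
  in
  let extra =
    Hashtbl.fold
      (fun f _ acc ->
        if StrSet.mem f goal_feats then acc else acc +. extra_penalty)
      stats.feat_counts 0.0
  in
  base +. matched +. extra

(* Rank candidate extension clauses, best first. *)
let rank (candidates : (string * clause_stats) list) goal_feats =
  candidates
  |> List.map (fun (name, st) -> (name, score st goal_feats))
  |> List.sort (fun (_, a) (_, b) -> compare b a)

Scoring a candidate this way costs only a few hash lookups and logarithms per inference, which is the kind of low overhead that internal guidance needs if it is not to slow the raw prover down too much.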

Cited by 44 publications (45 citation statements). References 13 publications.
“…They are precalculated to allow fast classification. Furthermore, new training examples can be added to existing classification data efficiently, similarly to [KU15].…”
Section: Generalised Classifiers
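The excerpt above highlights a convenient property of count-based (naive Bayes style) classification data: folding in a new training example only increments a few counters, so no retraining pass over the old data is needed. Below is a minimal OCaml sketch of such an incremental update, with a hypothetical table layout and function name (add_example), not the prover's actual storage format.

(* Hypothetical incremental update of naive Bayes classification data:
   the table maps a clause name to (number of uses, feature -> count).
   Folding in a new (clause, features) example touches only a few
   counters, so no retraining pass over old examples is required. *)
let add_example
    (table : (string, float * (string, float) Hashtbl.t) Hashtbl.t)
    (clause : string) (features : string list) =
  let uses, feat_counts =
    match Hashtbl.find_opt table clause with
    | Some entry -> entry
    | None -> (0.0, Hashtbl.create 16)
  in
  List.iter
    (fun f ->
      let c = Option.value (Hashtbl.find_opt feat_counts f) ~default:0.0 in
      Hashtbl.replace feat_counts f (c +. 1.0))
    features;
  Hashtbl.replace table clause (uses +. 1.0, feat_counts)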
“…The hints technique [Ver96] was among the earliest attempts to directly influence proof search by learning from previous proofs. Other systems are E/TSM [Sch00], an extension of E [Sch13] with term space maps, and MaLeCoP [UVŠ11] respectively FEMaLeCoP [KU15], which are versions of leanCoP [Ott08] extended by Naive Bayesian learning. -Learning of strategies: Finding good settings for ATPs automatically has been researched for example in the Blind Strategymaker (BliStr) project [Urb15].…”
Section: Introduction
“…the next tableau extension step [40,20] and first experiments with Monte-Carlo guided proof search [8] and reinforcement learning [21] have been done.…”
“…Learning from many proofs has also recently become a very useful method for automated finding of parameters of ATP strategies [22,9,19,16], and for learning of sequences of tactics in interactive theorem provers (ITPs) [7]. Several experiments with the compact leanCoP [18] system have recently shown that directly using trained machine learner for internal clause selection can significantly prune the search space and solve additional problems [24,11,5]. An obvious next step is to implement efficient learning-based clause selection also inside the strongest superposition-based ATPs. In this work, we introduce ENIGMA - Efficient learNing-based Internal Guidance MAchine for state-of-the-art saturation-based ATPs.…”