2018
DOI: 10.1007/978-3-319-96812-4_11
Enhancing ENIGMA Given Clause Guidance

Abstract: ENIGMA is an efficient implementation of learning-based guidance for given clause selection in saturation-based automated theorem provers. In this work, we describe several additions to this method. This includes better clause features, adding conjecture features as the proof state characterization, better data pre-processing, and repeated model learning. The enhanced ENIGMA is evaluated on the MPTP2078 dataset, showing significant improvements.

Cited by 18 publications (39 citation statements) | References 7 publications
“…Various possible choices of efficient clause features for theorem prover guidance have been experimented with [14,15,21,22]. The original ENIGMA [14] uses term-tree walks of length 3 as features, while the second version [15] reaches better results by employing various additional features. In particular, the following types of features are used (see [14, Sec. …]). Since there are only finitely many features in any training data, the features can be serially numbered.…”
Section: ENIGMA Clause Features
confidence: 99%
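The idea quoted above — enumerating term-tree walks and serially numbering the finitely many features seen in training — can be sketched as follows. This is a simplified illustration, not ENIGMA's implementation: terms are assumed to be `(symbol, arguments)` tuples, and the exact walk definition in ENIGMA differs in detail.

```python
# Sketch of term-walk clause features with serial numbering.
# Assumption: a term is a (symbol, list_of_argument_terms) tuple.

def term_walks(term, length=3):
    """Enumerate top-down symbol walks of the given length in a term tree."""
    symbol, args = term
    if length == 1:
        yield (symbol,)
        return
    for arg in args:
        for walk in term_walks(arg, length - 1):
            yield (symbol,) + walk

def number_features(clauses):
    """Serially number every distinct walk feature found in the clauses."""
    index = {}
    for clause in clauses:
        for term in clause:
            for walk in term_walks(term):
                index.setdefault(walk, len(index))
    return index

# Hypothetical clause containing the single term f(g(a), b):
clause = [("f", [("g", [("a", [])]), ("b", [])])]
features = number_features([clause])
# The only length-3 walk here is f -> g -> a.
```

Because the training data is finite, `number_features` assigns each distinct walk a stable index, which is what lets a learner consume clauses as sparse numeric vectors.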
“…When the number of features is greater than the base, which is our case as we intend to use hashing for dimensionality reduction, collisions are inevitable. When using a hash base of 32000 (ca. 2^15), there are almost as many hashing buckets as there are features in the training data (31675). Out of these features, ca. 12000 are hashed without a collision and 12000 buckets are unoccupied.…”
Section: Evaluation of Feature Hashing
confidence: 99%
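The hashing scheme described above — mapping features into a fixed number of buckets and accepting some collisions — can be sketched as below. The hash function is an assumption (Python's built-in `hash` stands in for whatever ENIGMA uses), and `hashing_stats` is a hypothetical helper for counting collision-free features and empty buckets, as the quoted evaluation does.

```python
# Sketch of the feature-hashing step: reduce an open-ended feature set
# to a fixed number of buckets, then measure collisions.
BASE = 32000  # ca. 2^15 hashing buckets, as in the quoted evaluation

def bucket(feature, base=BASE):
    """Map a feature (e.g. a symbol-walk tuple or string) to a bucket index."""
    return hash(feature) % base

def hashing_stats(features, base=BASE):
    """Return (features hashed without collision, unoccupied buckets)."""
    buckets = {}
    for f in features:
        buckets.setdefault(bucket(f, base), []).append(f)
    no_collision = sum(1 for fs in buckets.values() if len(fs) == 1)
    unoccupied = base - len(buckets)
    return no_collision, unoccupied
```

With roughly as many features as buckets, a sizable fraction of features collide while a similar fraction of buckets stay empty — the trade-off the citation reports (ca. 12000 collision-free features, 12000 empty buckets out of 31675 features).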
“…Various machine learning methods can handle numeric vectors, and their success heavily depends on the selection of correct clause features. Various possible choices of efficient clause features for theorem prover guidance have been experimented with [6,7,10,11]. The original ENIGMA [6] uses term-tree walks of length 3 as features, while the second version [7] reaches better results by employing various additional features.…”
Section: ENIGMA: Learning from Successful Proof Searches
confidence: 99%
“…This work proposes and develops a new learning-based proof guidance, ENIGMAWatch, for saturation-style first-order theorem provers. It is based on two previous guiding methods implemented for the E [13] ATP system: ProofWatch [4] and ENIGMA [7,8]. Both ProofWatch and ENIGMA enable E to use related proofs for guiding the proof search for a new conjecture.…”
Section: Introduction
confidence: 99%