2021
DOI: 10.1007/978-3-030-86059-2_11

lazyCoP: Lazy Paramodulation Meets Neurally Guided Search

Cited by 8 publications (6 citation statements)
References 25 publications
“…The same is true for non saturation-based RL-friendly provers too (e.g. lazyCoP, Rawson & Reger (2021)). This monolithic approach hinders free experimentation with novel machine learning (ML) models and RL algorithms and creates unnecessary complications for ML and RL experts willing to contribute to the field.…”
Section: Statement of Need
confidence: 78%
“…Guiding provers with RL is a hot topic. Recent projects in this domain include TRAIL (Trial Reasoner for AI that Learns) [2], FLoP (Finding Longer Proofs) [37], and lazyCoP [26]. We will now compare the new gym-saturation features with these three projects.…”
Section: Related Work
confidence: 99%
“…Row TABX shows for comparison the number of problems that was solved by at least one of the following provers: CMProver [39,40] in at least one of several considered configurations, SETHEO 3.3 [24], S-SETHEO [14], lazyCoP 0.1 [30] and SATCoP 0.1 [31]. 14 The next two rows indicate the numbers of problems that can be solved either by SGCD in at least one of the two goal-driven configurations or by at least one of the five contributors to TABX, but not both.…”
Section: Corpus Tptpcdt2
confidence: 99%
“…In recent years, machine learning (ML) and neural methods have been increasingly used to guide the search procedures of automated theorem provers (ATPs). Such methods have been so far developed for choosing inferences in connection tableaux systems [50,27,29,37,51], resolution/superposition-based systems [24,23,20,49], SAT solvers [48], tactical ITPs [17,3,5,18,30,42,40] and most recently also for the iProver [31] instantiation-based system [9]. In SMT (Satisfiability Modulo Theories), ML has so far been mainly used for tasks such as portfolio and strategy optimization [47,36,2].…”
Section: Introduction
confidence: 99%