2021
DOI: 10.1007/978-3-030-79876-5_30

Neural Precedence Recommender

Abstract: The state-of-the-art superposition-based theorem provers for first-order logic rely on simplification orderings on terms to constrain the applicability of inference rules, which in turn shapes the ensuing search space. The popular Knuth-Bendix simplification ordering is parameterized by a symbol precedence, a permutation of the predicate and function symbols of the input problem’s signature. Thus, the choice of precedence has an indirect yet often substantial impact on the amount of work required to complete a proof search…
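As a rough illustration of the object being recommended, the following Python sketch treats a precedence as a permutation of the signature and derives the symbol comparison it induces. This is not the paper's implementation; the symbol names and the comparison convention are assumptions.

```python
# A minimal sketch: a symbol precedence is a permutation of the signature,
# and it induces an ordering on symbols that the Knuth-Bendix ordering lifts
# (together with symbol weights) to a simplification ordering on terms.
# All symbol names here are hypothetical.

signature = ["f", "g", "h", "p", "q"]    # function and predicate symbols
precedence = ["p", "f", "q", "h", "g"]   # one of len(signature)! permutations

rank = {sym: i for i, sym in enumerate(precedence)}

def symbol_greater(a: str, b: str) -> bool:
    """a > b in the precedence iff a has a higher rank
    (a convention chosen here for illustration)."""
    return rank[a] > rank[b]

print(symbol_greater("g", "p"))  # True: "g" ranks above "p" here
```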

Cited by 3 publications (3 citation statements). References: 27 publications.
“…This is essential because the chance of deriving a contradiction reduces considerably as the number of clauses grows. Machine learning is currently being used to discover efficient heuristics [11,12,26], and to intelligently operate internal heuristic components [4,6,8,25].…”
Section: Introduction (mentioning, confidence: 99%)
“…In the clause and symbol nodes, the initial embedding is augmented with the metadata extracted from the input problem. Four message-passing layers follow: in each of these layers, each node embedding is updated by aggregating messages incoming from the node’s neighbors. Each message is the output of a trainable affine transformation of the embedding of the source node.…”
Section: Graph Neural Network (mentioning, confidence: 99%)
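To make the quoted description concrete, here is a hedged PyTorch sketch of one such message-passing layer. The sum aggregation and the ReLU nonlinearity are assumptions, not details from the cited work.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One message-passing round as in the quote: every message is a
    trainable affine map of the source node's embedding, and each node
    aggregates its incoming messages. Not the authors' code."""

    def __init__(self, dim: int):
        super().__init__()
        self.message = nn.Linear(dim, dim)  # trainable affine transformation

    def forward(self, h: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, dim) embeddings; edges: (num_edges, 2) (source, target)
        src, dst = edges[:, 0], edges[:, 1]
        msgs = self.message(h[src])      # one message per edge
        out = torch.zeros_like(h)
        out.index_add_(0, dst, msgs)     # sum incoming messages per node
        return torch.relu(out)

# Four such layers stacked would match the four rounds mentioned above:
layers = nn.ModuleList([MessagePassingLayer(dim=16) for _ in range(4)])
```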
“…More precisely, the initial embedding of a node is the concatenation of a node-specific metadata vector (empty in all nodes except clause and symbol nodes; see Section 4.1) and a trainable vector common to all nodes of one type. The architecture and hyperparameters of the GNN are the same as those used in our earlier work on recommending symbol precedences [2], to which we refer the reader for additional details.…”
(mentioning, confidence: 99%)
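A hedged sketch of this concatenation scheme follows; the node types, dimensions, and initialization below are assumptions, not taken from the cited work.

```python
import torch
import torch.nn as nn

class InitialEmbedding(nn.Module):
    """Sketch of the quoted initial-embedding scheme: concatenate the node's
    metadata vector (width 0 for node types without metadata) with a
    trainable vector shared by all nodes of that type."""

    def __init__(self, node_types, type_dim: int):
        super().__init__()
        # one trainable vector per node type
        self.type_vec = nn.ParameterDict(
            {t: nn.Parameter(torch.randn(type_dim)) for t in node_types}
        )

    def forward(self, node_type: str, metadata: torch.Tensor) -> torch.Tensor:
        n = metadata.shape[0]  # metadata: (num_nodes, meta_dim); meta_dim may be 0
        shared = self.type_vec[node_type].expand(n, -1)
        return torch.cat([metadata, shared], dim=1)

# Usage with hypothetical dimensions: clause nodes carry 4 metadata features,
# term nodes carry none.
emb = InitialEmbedding(node_types=["clause", "symbol", "term"], type_dim=8)
clause_h = emb("clause", torch.rand(10, 4))   # shape (10, 12)
term_h = emb("term", torch.empty(7, 0))       # shape (7, 8)
```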