2020
DOI: 10.48550/arxiv.2010.11328
Preprint

Logic Guided Genetic Algorithms

Dhananjay Ashok,
Joseph Scott,
Sebastian Wetzel
et al.

Abstract: We present a novel Auxiliary Truth enhanced Genetic Algorithm (GA) that uses logical or mathematical constraints both as a means of data augmentation and to compute loss (in conjunction with the traditional MSE), with the aim of increasing both the data efficiency and the accuracy of symbolic regression (SR) algorithms. Our method, the Logic-Guided Genetic Algorithm (LGGA), takes as input a set of labelled data points and auxiliary truths (ATs) (mathematical facts known a priori about the unknown function the regressor a…
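To make the abstract's core idea concrete, here is a minimal sketch of combining the traditional MSE with an auxiliary-truth penalty. The symmetry constraint, the additive weighting, and all function names are illustrative assumptions for this sketch, not the paper's exact formulation:

```python
import numpy as np

def mse(pred, target):
    """Traditional mean-squared-error term of the fitness."""
    return float(np.mean((pred - target) ** 2))

def at_violation(candidate, X):
    """Penalty for violating a hypothetical auxiliary truth: here we
    assume the unknown function is known to be symmetric in its two
    inputs, f(x0, x1) == f(x1, x0)."""
    swapped = X[:, ::-1]  # swap the two input columns
    return float(np.mean((candidate(X) - candidate(swapped)) ** 2))

def lgga_style_loss(candidate, X, y, weight=1.0):
    """Combined fitness: MSE plus a weighted auxiliary-truth penalty.
    The weighting scheme here is an assumption, not the paper's."""
    return mse(candidate(X), y) + weight * at_violation(candidate, X)

# Toy usage: a candidate that respects the auxiliary truth scores better.
rng = np.random.default_rng(0)
X = rng.random((100, 2))
y = X[:, 0] * X[:, 1]               # ground truth (symmetric)
good = lambda Z: Z[:, 0] * Z[:, 1]  # satisfies the auxiliary truth
bad = lambda Z: Z[:, 0] - Z[:, 1]   # violates it
```

In a GA, such a combined score would rank candidate expressions, so the logical constraint steers selection even where labelled data is sparse.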

Cited by 2 publications (3 citation statements)
References 14 publications
“…A similar approach called Logic Guided Genetic Algorithms (LGGA) was proposed by Ashok et al. (2020). Here, the domain-specific knowledge is called auxiliary truths (AT) that are simple mathematical facts known a priori about the unknown function sought.…”
Section: Related Work
confidence: 99%
“…Other approaches dynamically change the training data set through the course of the modeling process. In particular, counterexamples on which otherwise promising models fail to be consistent with prior knowledge are generated on the fly and used to drive the search towards valid models that comply with prior knowledge (Błądek & Krawiec, 2019; Ashok et al., 2020).…”
Section: Introduction
confidence: 99%
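For intuition, the dynamic-training-set idea in the quote above can be sketched as follows. The symmetry constraint, the random sampling scheme, and the function names are illustrative assumptions, not the cited papers' actual procedures:

```python
import numpy as np

def augment_with_symmetry(X, y):
    """Data augmentation from a hypothetical auxiliary truth: if the
    target is known to satisfy f(x0, x1) == f(x1, x0), each labelled
    point yields a second labelled point at the swapped inputs."""
    return np.vstack([X, X[:, ::-1]]), np.concatenate([y, y])

def counterexamples(candidate, n=1000, tol=1e-6, seed=0):
    """Sample random inputs and return those on which the candidate
    model breaks the symmetry constraint; such points can be fed back
    into the training set to drive the search towards valid models."""
    Z = np.random.default_rng(seed).random((n, 2))
    gap = np.abs(candidate(Z) - candidate(Z[:, ::-1]))
    return Z[gap > tol]
```

A search loop would call `counterexamples` on each promising candidate and, whenever violations are found, grow the training set before the next generation.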
“…As a black box, we will not change the internal structure of the NN throughout the experiments, i.e., we will not purposely design unique NN structures to fit different data; instead, we always use the same simple three-layer MLP as the black box and only designate the input and output data for it to learn different prediction models. We use TuringBot [33] for model explanation based on symbolic regression; it is a widely used symbolic regression algorithm based on simulated annealing that has performed well on a variety of physics-inspired learning problems [45].…”
Section: AI and Explainable AI Models
confidence: 99%