Proceedings of the 2020 Genetic and Evolutionary Computation Conference
DOI: 10.1145/3377930.3390152

Symbolic regression driven by training data and prior knowledge

Abstract: In symbolic regression, the search for analytic models is typically driven purely by the prediction error observed on the training data samples. However, when the data samples do not sufficiently cover the input space, the prediction error does not provide sufficient guidance toward desired models. Standard symbolic regression techniques then yield models that are partially incorrect, for instance, in terms of their steady-state characteristics or local behavior. If these properties were considered already dur…


Cited by 20 publications (21 citation statements). References 24 publications.
“…Depending on the problem, the function set can be arbitrarily enriched, for example by adding trigonometric functions. Furthermore, it is easy to use additional prior knowledge and constraints in SR methods in order to generate models with desired properties, see [33], [34].…”
Section: Ease Of Use
confidence: 99%
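The enrichable function set mentioned in this statement can be illustrated with a minimal sketch. The primitive names and structure below are hypothetical, not taken from any particular SR library; the point is only that trigonometric (or other) primitives are added as ordinary entries alongside the arithmetic ones.

```python
import math

# Hypothetical SR primitive set: each entry maps a name to
# (callable, arity). Enriching the set for a periodic problem
# is just a matter of adding entries such as sin and cos.
FUNCTION_SET = {
    "add": (lambda a, b: a + b, 2),
    "sub": (lambda a, b: a - b, 2),
    "mul": (lambda a, b: a * b, 2),
    # Trigonometric enrichment:
    "sin": (math.sin, 1),
    "cos": (math.cos, 1),
}

def apply_primitive(name, *args):
    """Look up a primitive by name and apply it to its arguments."""
    func, arity = FUNCTION_SET[name]
    assert len(args) == arity, f"{name} expects {arity} argument(s)"
    return func(*args)
```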
“…Auguste et al. presented two new methods to include monotonicity constraints in regression and classification trees [1]. In [9] the authors present a multi-objective symbolic regression approach that minimizes both the approximation error on the training data and the constraint violations on a constraint dataset. To this end, they extended the NSGA-II algorithm and used sampling to evaluate the constraints.…”
Section: Related Work
confidence: 99%
“…They use a satisfiability solver to check whether each candidate model fulfills shape or symmetry constraints by searching for counter-examples to the desired constraint. Kubalík et al. [19] use multi-objective GP to minimize the approximation error on the training data and the constraint violations on the constraint data. The constraint data are artificially generated to capture the desired constraint.…”
Section: Related Work
confidence: 99%
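The scheme these statements describe, minimizing training error and sampled constraint violations as separate objectives, can be sketched as follows. This is an illustrative example, not the authors' implementation: the monotonicity constraint, the finite-difference check, and all function names here are assumptions chosen for a one-dimensional case.

```python
import numpy as np

def constraint_violation(model, xs, eps=1e-3):
    """Second objective: total violation of a hypothetical
    monotonicity constraint (model must be non-decreasing),
    estimated on sampled constraint points xs."""
    # Finite-difference slope estimate at each sample point.
    slopes = (model(xs + eps) - model(xs)) / eps
    # Negative slopes violate the constraint; sum their magnitudes.
    return float(np.sum(np.maximum(0.0, -slopes)))

def objectives(model, x_train, y_train, x_constraint):
    """Objective vector for NSGA-II-style selection:
    (training error, constraint violation)."""
    mse = float(np.mean((model(x_train) - y_train) ** 2))
    return mse, constraint_violation(model, x_constraint)
```

A multi-objective GP loop would then rank candidate models by non-dominated sorting over this pair, so a model that fits the training data but violates the constraint is dominated by one that satisfies both objectives.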