On Improving Generalisation in Genetic Programming
Year: 2009
DOI: 10.1007/978-3-642-01181-8_6

Cited by 28 publications (22 citation statements: 0 supporting, 22 mentioning, 0 contrasting)
References 8 publications
“…Feature generation is the process of deriving new features from existing ones (Guo, Jack, and Nandi 2005). In this technique, an evolutionary algorithm is used to generate and combine the results of multiple independently discovered expressions, e.g., by using a linear combination of GP results (Keijzer 2004; Costelloe and Ryan 2009), or by using non-linear function estimators applied to GE (de Silva, Noorian, Davis, and Leong 2013). This can be considered a hybrid of machine learning and symbolic regression, as the final learning model is constructed from a combination of simpler features created through a process similar to symbolic regression.…”
Section: Symbolic Regression and Feature Generation (mentioning)
confidence: 99%
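The linear-combination idea in the statement above can be made concrete with a short sketch. The snippet below is a minimal illustration, not code from any of the cited papers: each independently evolved GP expression is treated as a feature column, and the combination weights are fitted by ordinary least squares; the feature columns and target here are placeholders.

```python
# Minimal sketch (assumed, not from the cited papers): feature generation
# by linear combination of independently evolved GP expressions.
import numpy as np

def combine_gp_features(feature_outputs, targets):
    """Least-squares weights for a linear combination of GP-derived features.

    feature_outputs: (n_samples, n_features) array; one column per
                     independently discovered GP expression.
    targets:         (n_samples,) array of true target values.
    Returns (weights, intercept).
    """
    X = np.column_stack([np.ones(len(targets)), feature_outputs])
    coeffs, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return coeffs[1:], coeffs[0]

# Usage with two placeholder "evolved" expressions as features:
x = np.linspace(-1.0, 1.0, 50)
features = np.column_stack([x**2, np.sin(x)])   # stand-ins for GP outputs
y = 3.0 * x**2 - 0.5 * np.sin(x) + 1.0          # synthetic target
weights, intercept = combine_gp_features(features, y)
```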
“…Costelloe and Ryan [1] experimentally concluded that GP with linear scaling may perform better than standard GP on training data, but that the technique does not generalize well to test data. They proposed combining "No Same Mate" selection with linear scaling to improve the generalization ability of evolved GP solutions.…”
Section: F. "Linear Scaling" with "No Same Mate" Selection (mentioning)
confidence: 99%
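Linear scaling itself (Keijzer 2004) has a closed form: before an individual's error is measured, its raw output y is replaced by a + b*y, with the slope and intercept chosen by least squares against the targets t. Below is a minimal sketch assuming a mean-squared-error fitness; the "No Same Mate" selection restriction is a separate mechanism and is not sketched here.

```python
# Minimal sketch of Keijzer-style linear scaling: fitness is the MSE of
# the individual's output after the optimal affine rescaling a + b*y.
import numpy as np

def linearly_scaled_mse(y, t):
    """MSE of raw GP output y after least-squares linear scaling to targets t."""
    b = np.cov(t, y, bias=True)[0, 1] / (np.var(y) + 1e-12)  # optimal slope
    a = np.mean(t) - b * np.mean(y)                           # optimal intercept
    return np.mean((t - (a + b * y)) ** 2)

# A mis-scaled, offset output still scores perfectly after scaling:
t = np.array([1.0, 2.0, 3.0, 4.0])
y = 10.0 * t + 5.0
print(linearly_scaled_mse(y, t))  # ~0.0
```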
“…For example, the "Quartic Polynomial" problem (using the function f(x) = x^4 + x^3 + x^2 + x) is probably the most widely used benchmark in GP [15].…”
Section: A. Symbolic Regression (mentioning)
confidence: 99%
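As a concrete reference point, the benchmark amounts to recovering the target function below from sampled data. The sketch that follows shows the usual fitness evaluation; the sample range and size (20 points in [-1, 1]) are a common convention assumed here, not taken from [15].

```python
# Minimal sketch of the "Quartic Polynomial" symbolic regression benchmark:
# a candidate expression is scored by its error against the known target.
import numpy as np

def quartic_target(x):
    return x**4 + x**3 + x**2 + x

def fitness(candidate, n_points=20):
    """Sum of absolute errors of `candidate` on sampled points (lower is better)."""
    xs = np.linspace(-1.0, 1.0, n_points)
    return float(np.sum(np.abs(candidate(xs) - quartic_target(xs))))

print(fitness(quartic_target))       # 0.0 for the exact target
print(fitness(lambda x: x**2 + x))   # > 0 for a partial match
```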