Published: 2002
DOI: 10.1007/3-540-46033-0_22

Applying Boosting Techniques to Genetic Programming

Abstract: This article presents an improvement to genetic programming based on boosting, a technique originating in the machine learning field. In the first part of the paper, we test the gains offered by boosting on binary classification problems. We then turn to regression problems and propose an algorithm, called GPboost, that stays closer to AdaBoost's original idea of a distribution over examples than previous implementations of boosting for genetic programming.
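To illustrate the distribution idea the abstract refers to, here is a minimal Python sketch of an AdaBoost.R2-style boosting loop wrapped around a regression learner. This is a sketch under assumptions, not the paper's exact algorithm: the `evolve` callable is a hypothetical stand-in for one genetic-programming run whose fitness is weighted by the current distribution, and GPboost's actual update rule may differ from this classic scheme.

```python
import numpy as np

def gp_boost(X, y, evolve, n_rounds=10):
    """AdaBoost.R2-style boosting loop around a regression learner.

    `evolve(X, y, D)` is a hypothetical stand-in for one genetic-
    programming run whose fitness is weighted by the distribution D;
    it must return a callable model: model(X) -> predictions.
    """
    n = len(y)
    D = np.full(n, 1.0 / n)              # uniform initial distribution
    models, betas = [], []
    for _ in range(n_rounds):
        model = evolve(X, y, D)          # weak learner under distribution D
        err = np.abs(model(X) - y)
        if err.max() == 0:               # perfect fit: keep it and stop
            models.append(model)
            betas.append(1e-10)
            break
        L = err / err.max()              # per-example loss scaled to [0, 1]
        eps = float(np.sum(D * L))       # weighted average loss
        if eps >= 0.5:                   # weak-learning condition violated
            break
        beta = eps / (1.0 - eps)
        D = D * beta ** (1.0 - L)        # down-weight well-fitted examples
        D = D / D.sum()                  # renormalize to a distribution
        models.append(model)
        betas.append(beta)
    return models, betas

def predict(models, betas, X):
    """Combine members by the weighted median, as in AdaBoost.R2."""
    preds = np.array([m(X) for m in models])   # shape (T, n)
    w = np.log(1.0 / np.array(betas))          # per-model confidence
    order = np.argsort(preds, axis=0)          # sort predictions per example
    cum = np.cumsum(w[order], axis=0)          # cumulative confidence
    idx = np.argmax(cum >= 0.5 * w.sum(), axis=0)
    cols = np.arange(preds.shape[1])
    return preds[order[idx, cols], cols]
```

For a quick standalone test, `evolve` could be any learner that accepts per-example weights, e.g. the hypothetical `evolve = lambda X, y, D: np.poly1d(np.polyfit(X, y, 2, w=D))` on one-dimensional data; in the paper's setting it would be a full GP run.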

Cited by 33 publications (25 citation statements). References 8 publications.
“…Boosting practices have been effectively used to enhance the functioning of other recognized processes from the Machine Learning field [12], such as ANN and GP. Su et al. [13] explained neural network techniques in software reliability modeling projects from a mathematical point of view.…”
Section: 'S
confidence: 99%
“…It has been extensively used in neural networks (e.g., [7,14,17,18,35]) and even more extensively in boosting and machine learning in general (albeit, mostly for classification). See [2,4,5,8,22,24,29] for examples. Krogh and Vedelsby [14] presented the idea of using disagreement of ensemble models for quantifying the ambiguity of ensemble prediction for neural networks, but the approach has not been adapted to symbolic regression.…”
Section: Ensemble Selectionmentioning
confidence: 99%
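The Krogh and Vedelsby result mentioned above is the ambiguity decomposition: the squared error of a weighted ensemble equals the weighted average member error minus the weighted variance (ambiguity) of the members around the ensemble prediction. A minimal sketch, with hypothetical names:

```python
import numpy as np

def ambiguity_decomposition(preds, y, w=None):
    """Krogh & Vedelsby decomposition: e_ens = e_bar - a_bar.

    preds: (T, n) predictions of T ensemble members on n examples.
    y:     (n,) regression targets.
    w:     (T,) member weights summing to 1 (uniform if None).
    """
    T = preds.shape[0]
    w = np.full(T, 1.0 / T) if w is None else np.asarray(w)
    f_bar = w @ preds                                   # ensemble prediction
    e_bar = w @ np.mean((preds - y) ** 2, axis=1)       # avg member error
    a_bar = w @ np.mean((preds - f_bar) ** 2, axis=1)   # avg ambiguity
    e_ens = np.mean((f_bar - y) ** 2)                   # ensemble error
    return e_ens, e_bar, a_bar                          # e_ens == e_bar - a_bar
```

Since the ambiguity term is nonnegative, the ensemble never does worse than the weighted average of its members, which is why member disagreement is a useful diversity signal.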
“…The resulting solutions were found to represent sets of weak learners making post-training evaluation of team member roles difficult [22,23]. Other approaches evolved ensembles of learners using an island approach [10,20] in which mechanisms for diversity maintenance (and thus cooperation) were not as direct as they are in an explicitly coevolutionary model. Moreover, these approaches do not scale to producing large teams or training on large datasets since they require one run to generate each individual in the final solution.…”
Section: Introduction
confidence: 99%