2009
DOI: 10.1080/17445760802660387
On dynamical genetic programming: simple Boolean networks in learning classifier systems

Abstract: Many representations have been presented to enable the effective evolution of computer programs. Turing was perhaps the first to present a general scheme by which to achieve this end. Significantly, Turing proposed a form of discrete dynamical system, and yet dynamical representations remain almost unexplored within conventional genetic programming. This paper presents results from an initial investigation into using simple dynamical genetic programming representations within a Learning Classifier System…

Cited by 19 publications (16 citation statements); references 51 publications.
“…This general result has long been predicted by Kauffman [34] and is now supported by data from real GRNs, which appear to be relatively sparsely connected: on average it seems 1.5 ≤ B ≤ 2 (see [39] for discussion). The aforementioned work evolving RBN for machine learning problems found that a B value of around 2 also typically emerged [11]. This result extends that previously reported by [48], who also found longer attractors harder to evolve.…”
Section: Results (supporting)
Confidence: 83%
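The quoted statement turns on two RBN notions: node connectivity B and attractor length. As orientation only, here is a minimal Python sketch of a Kauffman-style random Boolean network with a naive attractor-length measurement; the class and function names are illustrative assumptions, not code from the cited papers.

```python
import random

class RBN:
    """Minimal random Boolean network sketch (hypothetical, for illustration).

    N nodes, each wired to B randomly chosen inputs and given a random
    Boolean lookup table, in the spirit of Kauffman's classic model.
    """

    def __init__(self, n_nodes, b, seed=0):
        rng = random.Random(seed)
        self.inputs = [[rng.randrange(n_nodes) for _ in range(b)]
                       for _ in range(n_nodes)]
        # One truth-table entry per possible input combination (2^B rows).
        self.tables = [[rng.randrange(2) for _ in range(2 ** b)]
                       for _ in range(n_nodes)]
        self.state = [rng.randrange(2) for _ in range(n_nodes)]

    def step(self):
        """Synchronously update every node from its B inputs."""
        def lookup(i):
            idx = 0
            for src in self.inputs[i]:
                idx = (idx << 1) | self.state[src]
            return self.tables[i][idx]
        self.state = [lookup(i) for i in range(len(self.state))]

def attractor_length(net, max_steps=1000):
    """Run the network until a previously seen state recurs; the gap
    between the two visits is the attractor (cycle) length."""
    seen = {}
    for t in range(max_steps):
        key = tuple(net.state)
        if key in seen:
            return t - seen[key]
        seen[key] = t
        net.step()
    return None  # no cycle found within the step budget

net = RBN(n_nodes=16, b=2)
print("attractor length:", attractor_length(net))
```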
“…In the above, fitness is calculated from the state of the N trait nodes on the step after T network update cycles, i.e., within an attractor (see [11]). Several related prior works discussed in Section IV have instead sought to evolve temporal behavior: particular sequences of gene activity in which the action of the GRN is sampled on every update cycle, up to and potentially including within an attractor.…”
Section: Results (mentioning)
Confidence: 99%
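Continuing the same hypothetical sketch (reusing the RBN class above), the following outlines the quoted fitness scheme: run the network for T update cycles, then score the trait nodes against a target pattern. The function name, the target pattern, and all parameters are assumptions for illustration, not the authors' implementation.

```python
def trait_fitness(net, trait_nodes, target, t_updates):
    """Hedged sketch of the quoted scheme: fitness is read from the
    trait nodes' states after T update cycles, assumed to lie within
    an attractor for sufficiently large T."""
    for _ in range(t_updates):
        net.step()
    # Count trait nodes matching the (hypothetical) target pattern.
    return sum(int(net.state[i] == bit) for i, bit in zip(trait_nodes, target))

# Example: score four trait nodes against an all-ones target after T = 50 cycles.
net = RBN(n_nodes=16, b=2, seed=1)
print("fitness:", trait_fitness(net, trait_nodes=[0, 1, 2, 3],
                                target=[1, 1, 1, 1], t_updates=50))
```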
“…However, a number of representations beyond this scheme have previously been presented, including real numbers, fuzzy logic, genetic programming, and artificial neural networks. Recently, we [2], [3] investigated the use of a Dynamical Genetic Programming (DGP) representation scheme [4] within Learning Classifier Systems. It was shown possible to evolve ensembles of Random Boolean Networks (RBN) [5] as a discrete, temporally dynamic knowledge representation scheme within LCS.…”
Section: Introduction (mentioning)
Confidence: 99%
“…The same approach has been used to explore attractor stability [15] and to model real regulatory network data, e.g., see [16] for an example using probabilistic RBN. Sipper and Ruppin [17] evolved RBN for the well-known density task, and Bull [18] has evolved RBN ensembles to solve benchmark machine learning problems. RBN appear to have useful properties for computation, such as fault tolerance (e.g., see [19]).…”
Section: Introduction (mentioning)
Confidence: 99%