2013
DOI: 10.1016/j.neucom.2013.04.005
Fast learning neural networks using Cartesian genetic programming


Cited by 82 publications (12 citation statements) | References 50 publications
“…Koza and Rice [1991] brought a new dimension to this area by using genetic evolution to optimize both the weights and the structure of a neural network. There have been many other works in this area [Palmes, Hayasaka, and Usui, 2005; Rivero, Dorado, Rabuñal, and Pazos, 2010; Tsai and Lin, 2011; Turner, Dudek, and Ritchie, 2010; Mahsal Khan, Masood Ahmad, Muhammad Khan, and Miller, 2013]; however, none of these modifications delivers satisfactory performance across all problems in general. Thus, the search for an approach that speeds up convergence and finds the optimal structure remains important.…”
Section: Artificial Neural Network
confidence: 99%
“…To train a single hidden layer feedforward neural network (SLFN) and to optimize its weights, an algorithm was proposed that hybridizes the self-organizing map (SOM) algorithm with the ELM algorithm [30]. The CGP-based Artificial Neural Network (CGPANN), built on the Cartesian genetic programming (CGP) technique, is a fast-learning neuroevolutionary algorithm applicable to both feedforward and recurrent networks [31]. Even though these methods produce good outcomes, they are computationally intensive.…”
Section: Related Study
confidence: 99%
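The CGP-based encoding mentioned above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual CGPANN implementation: here each node gene simply holds the indices of two earlier values, their connection weights, and an activation function drawn from a small pool, and the genotype is decoded feed-forward.

```python
import math
import random

# Illustrative CGP-style neural-network genotype (an assumption, not the
# authors' exact CGPANN encoding): each node gene stores two input indices,
# two connection weights, and an activation-function index.
ACTIVATIONS = [math.tanh, lambda x: 1.0 / (1.0 + math.exp(-x))]

def random_node(num_prior):
    """A node may connect to any program input or any earlier node."""
    return {
        "inputs": [random.randrange(num_prior) for _ in range(2)],
        "weights": [random.uniform(-1.0, 1.0) for _ in range(2)],
        "act": random.randrange(len(ACTIVATIONS)),
    }

def random_genotype(n_inputs, n_nodes):
    return [random_node(n_inputs + i) for i in range(n_nodes)]

def evaluate(genotype, inputs):
    """Feed-forward decode: each node's output is appended to the value list."""
    values = list(inputs)
    for node in genotype:
        s = sum(w * values[i] for i, w in zip(node["inputs"], node["weights"]))
        values.append(ACTIVATIONS[node["act"]](s))
    return values[-1]  # treat the last node as the network output

random.seed(0)
g = random_genotype(n_inputs=2, n_nodes=4)
y = evaluate(g, [0.5, -0.3])
```

Because every node may only reference earlier values, the decoded graph is acyclic by construction, which is what makes the feed-forward evaluation a single left-to-right pass.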
“…Theoretically, due to the universal approximation theorem [9], this is the most expressive activation function; however, it requires many more parameters. Some of the early attempts to learn activation functions in neural networks can be found in Poli [39], Weingaertner et al. [50], and Khan et al. [22], where the authors proposed learning the best activation function per neuron from a pool of candidate activation functions using genetic and evolutionary algorithms. Maxout [15] was introduced as an activation function aimed at enhancing the model-averaging properties of dropout [45].…”
Section: Related Work
confidence: 99%
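The per-neuron activation-function learning described above can be sketched with a tiny (1+λ) evolutionary strategy. Everything here is an assumption for illustration (the fixed topology, the candidate pool, the toy target of approximating tanh), not the cited authors' setups: the genotype is just one activation index per hidden neuron, and mutation swaps one index for a random pool member.

```python
import math
import random

# Hedged sketch of evolving a per-neuron activation choice from a candidate
# pool, in the spirit of the cited early approaches; the pool, topology, and
# fitness target below are illustrative assumptions.
POOL = [math.tanh, lambda x: max(0.0, x), lambda x: 1.0 / (1.0 + math.exp(-x))]

def net(acts, weights, x):
    """Tiny fixed-topology net: three hidden neurons, each with its own activation."""
    hidden = [POOL[a](w * x) for a, w in zip(acts, weights)]
    return sum(hidden) / len(hidden)

def fitness(acts, weights):
    # Toy objective: approximate f(x) = tanh(x) on a few sample points.
    xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
    return -sum((net(acts, weights, x) - math.tanh(x)) ** 2 for x in xs)

def evolve(generations=200, lam=4):
    random.seed(1)
    weights = [1.0, 1.0, 1.0]                 # weights held fixed; only activations evolve
    parent = [random.randrange(len(POOL)) for _ in range(3)]
    best_f = fitness(parent, weights)
    for _ in range(generations):
        best = parent
        for _ in range(lam):
            child = list(parent)
            child[random.randrange(3)] = random.randrange(len(POOL))  # mutate one gene
            f = fitness(child, weights)
            if f >= best_f:                   # ties favor the offspring, as is common in CGP
                best, best_f = child, f
        parent = best
    return parent, best_f

acts, f = evolve()
```

With the all-tanh genotype the toy network reproduces the target exactly, so the search should drive the (negative) fitness toward zero; letting ties favor the offspring keeps the search drifting through neutral genotypes instead of stalling.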