2020 IEEE Congress on Evolutionary Computation (CEC)
DOI: 10.1109/cec48606.2020.9185851
Evolving Deep Recurrent Neural Networks Using A New Variable-Length Genetic Algorithm

Cited by 8 publications (5 citation statements); references 24 publications.
“…FFNN optimization is proposed in [32], CNN optimization in [15]-[17], [33], and RNN optimization in [33,34]; these are translated into a single objective using the scalarization method in [32] or the Pareto Dominance (PD) approach in [16]. The scalarization method is very sensitive to the weighting of the objectives.…”
Section: B. Supportive Combination
Mentioning, confidence: 99%
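As an illustrative aside (not drawn from either cited paper), the sketch below contrasts the two formulations the excerpt mentions: weighted-sum scalarization, whose ranking of candidate networks changes with the chosen weights, and a plain Pareto-dominance test, which needs no weights. The objective values and weights are invented for illustration.

```python
# Illustrative sketch (assumed values, not from the cited papers): weighted-sum
# scalarization vs. a Pareto-dominance check for two minimization objectives,
# e.g. (validation error, number of parameters) of candidate networks.

def scalarize(objectives, weights):
    """Collapse a tuple of objectives into one score via a weighted sum."""
    return sum(w * f for w, f in zip(weights, objectives))

def dominates(a, b):
    """True if a Pareto-dominates b: no worse in every objective, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

candidates = {
    "net_A": (0.10, 5_000_000),   # low error, large network
    "net_B": (0.15, 1_000_000),   # higher error, small network
}

# The scalarized winner flips when the weights change, which is the
# sensitivity to objective weighting that the excerpt points out.
for weights in [(1.0, 1e-8), (1.0, 1e-7)]:
    scores = {name: scalarize(objs, weights) for name, objs in candidates.items()}
    print(weights, "->", min(scores, key=scores.get))

# Under Pareto dominance neither network dominates the other, so both stay on
# the front and no weighting has to be fixed in advance.
print(dominates(candidates["net_A"], candidates["net_B"]),
      dominates(candidates["net_B"], candidates["net_A"]))
```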
“…It may require a large number of iterations to converge to a small part of the Pareto optimal front. Third, considering the way DNN architectures are built, there are two categories: layer stacking [15,17,32]-[34] and block stacking [16] approaches. The layer stacking approach is not preferred for complex classification problems.…”
Section: B. Supportive Combination
Mentioning, confidence: 99%
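A hypothetical, minimal sketch of the layer-stacking versus block-stacking distinction drawn in this excerpt (the genome contents and block definitions are invented): a layer-stacking genome lists individual layers directly, whereas a block-stacking genome lists predefined multi-layer blocks that are expanded when the network is assembled.

```python
# Hypothetical sketch of layer-stacking vs. block-stacking genomes.
# Each genome is decoded into a flat list of layer descriptions.

# Layer stacking: one gene per layer.
layer_genome = ["conv3x3-32", "conv3x3-64", "maxpool", "dense-10"]

# Block stacking: one gene per predefined multi-layer block.
BLOCKS = {
    "conv_block": ["conv3x3", "batchnorm", "relu"],
    "res_block":  ["conv3x3", "batchnorm", "relu", "conv3x3", "add_skip"],
}
block_genome = ["conv_block", "res_block", "res_block"]

def decode_layer_genome(genome):
    return list(genome)                       # genes already are layers

def decode_block_genome(genome):
    return [layer for gene in genome for layer in BLOCKS[gene]]

print(decode_layer_genome(layer_genome))
print(decode_block_genome(block_genome))      # each block gene expands to several layers
```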
“…2) Supportive combination: In this stream of work, GAs are used to optimize the connections and hyperparameters of the DNN architecture, while the weights are optimized using other algorithms such as back-propagation [24]. FFNN optimization is proposed in [25], CNN optimization in [10,20,26,27] and RNN optimization in [26,28]. The most commonly considered hyperparameters are the number of hidden layers, learning rate, type of optimizer, number of filters, layers' positions, and activation functions.…”
Section: Related Work
Mentioning, confidence: 99%
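As a concrete, hedged sketch of this "supportive combination" (my own illustration, not the setup of any cited paper): the genetic algorithm's chromosome carries only hyperparameters, while the weights are fitted by back-propagation inside the learner. Here scikit-learn's MLPClassifier stands in as the back-propagation trainer, and the chromosome layout is an assumption.

```python
# Sketch of a "supportive combination" fitness evaluation: the GA supplies
# hyperparameters, back-propagation (inside MLPClassifier.fit) supplies weights.
# Assumed chromosome layout: [n_hidden_layers, units_per_layer, learning_rate, activation_id].
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

ACTIVATIONS = ["relu", "tanh", "logistic"]

def fitness(chromosome):
    n_layers, units, lr, act_id = chromosome
    X, y = load_digits(return_X_y=True)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
    model = MLPClassifier(
        hidden_layer_sizes=(units,) * n_layers,   # architecture chosen by the GA
        learning_rate_init=lr,                    # learning rate chosen by the GA
        activation=ACTIVATIONS[act_id],           # activation chosen by the GA
        max_iter=200,
        random_state=0,
    )
    model.fit(X_tr, y_tr)                         # weights learned by back-propagation
    return model.score(X_val, y_val)              # validation accuracy as GA fitness

# Example chromosome: 2 hidden layers of 64 units, learning rate 1e-3, relu.
print(fitness([2, 64, 1e-3, 0]))
```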
“…We can distinguish a few different research approaches within this stream. First, considering the network depth, we can separate them into two categories: fixed [10,27] and variable [20,26,28] network depth approaches. While the former might waste computational power in cases when the network depth is set to a value higher than optimal, the latter tries to find the optimal network depth and, thus, provides more computationally efficient candidate solutions.…”
Section: Related Work
Mentioning, confidence: 99%
“…Li et al. provide a deep insight into variable-length multiobjective optimization problems [23]. Viswambaran et al. employ a GA with variable-length chromosomes to evolve deep recurrent neural networks (DRNNs), using a variable-length encoding strategy to represent DRNNs of different depths [42]. The common characteristic of these graphs is that their feasible structures consist of an unfixed or unknown number of nodes or layers.…”
Section: Introduction
Mentioning, confidence: 99%
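To make the variable-length idea concrete, below is a minimal, hypothetical encoding in the spirit of the approach described in the excerpt (the gene fields and mutation operators are a simplification, not the exact scheme of [42]): each gene describes one recurrent layer, and mutation may insert or delete genes, so different individuals represent recurrent networks of different depths.

```python
# Minimal sketch of a variable-length chromosome for deep RNNs: one gene per
# recurrent layer, so the chromosome length determines the network depth.
# Gene fields and operators are illustrative, not the exact scheme of [42].
import random

CELL_TYPES = ["lstm", "gru", "simple_rnn"]

def random_gene():
    return {"cell": random.choice(CELL_TYPES), "units": random.choice([32, 64, 128])}

def random_chromosome(max_depth=6):
    return [random_gene() for _ in range(random.randint(1, max_depth))]

def mutate(chromosome, p_add=0.2, p_remove=0.2):
    """Point-mutate layer genes and, with some probability, insert or delete a
    gene, so the chromosome (and hence the network depth) can grow or shrink."""
    child = [dict(gene) for gene in chromosome]        # copy before editing
    for gene in child:
        if random.random() < 0.3:
            gene["units"] = random.choice([32, 64, 128])
    if random.random() < p_add:
        child.insert(random.randrange(len(child) + 1), random_gene())
    if len(child) > 1 and random.random() < p_remove:
        child.pop(random.randrange(len(child)))
    return child

parent = random_chromosome()
child = mutate(parent)
print(len(parent), "layers ->", len(child), "layers")
print(child)
```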