2018
DOI: 10.1007/978-3-319-99259-4_34

Lamarckian Evolution of Convolutional Neural Networks

Abstract: Convolutional neural networks are among the most successful image classifiers, but adapting their network architecture to a particular problem is computationally expensive. We show that an evolutionary algorithm saves training time during network architecture optimization if learned network weights are inherited over generations by Lamarckian evolution. Experiments on typical image datasets show similar or significantly better test accuracies and improved convergence speeds compared to two differe…


Cited by 14 publications (6 citation statements)
References 13 publications
“…In [5], authors propose a population-based method that renders, at each generation, a set of architectures generated using local transformation. They use Lamarckian inheritance [15] to permit faster architecture evaluation. Their work, however, uses complex operators which makes it difficult to adapt to other tasks.…”
Section: B. Multi-objective Neural Architecture Search
confidence: 99%
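The Lamarckian inheritance referenced above can be illustrated with a minimal Python sketch: when a parent architecture is mutated, the child copies the parent's trained weights for all shared layers and only initializes the newly added layer from scratch, so it resumes training instead of restarting. The dict-of-lists network representation and the function name are illustrative toys, not the paper's actual implementation.

```python
import random

def mutate_with_inheritance(parent):
    """Add a layer to a parent architecture, inheriting trained weights.

    Networks are hypothetical toy dicts mapping layer names to weight
    lists; in a real system these would be trained tensors.
    """
    # Lamarckian step: child starts from the parent's learned weights.
    child = {name: list(weights) for name, weights in parent.items()}
    # Only the newly added layer gets a fresh random initialization.
    new_name = f"layer{len(child)}"
    child[new_name] = [random.gauss(0.0, 0.1) for _ in range(4)]
    return child

parent = {"layer0": [0.5, -0.2], "layer1": [1.0, 0.3]}
child = mutate_with_inheritance(parent)
```

Because the shared layers keep their trained values, evaluating the child requires only fine-tuning, which is the source of the faster architecture evaluation the citing authors mention.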
“…Furthermore, Johner et al [57] use a ranking function to choose individuals by rank. A selection trick termed niching is used in [67], [126] to avoid getting stuck in local optima. This trick allows offspring worse than the parent to survive for several generations until a better one evolves.…”
Section: B. Selection Strategy
confidence: 99%
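The niching trick described above can be sketched as a tolerance rule in the selection step: a worse offspring replaces its parent anyway, for up to a fixed number of consecutive generations, giving the lineage a chance to cross a fitness valley. This is a minimal illustration under assumed names (`niching_select`, `patience`); it is not code from the cited works.

```python
def niching_select(parent, offspring, fitness, patience, worse_streak):
    """Select between parent and offspring with temporary tolerance.

    Returns (selected_individual, updated_worse_streak). A worse
    offspring is accepted while worse_streak < patience, which lets the
    search escape local optima at the cost of short-term fitness.
    """
    if fitness(offspring) >= fitness(parent):
        return offspring, 0  # improvement: reset the tolerance counter
    if worse_streak < patience:
        return offspring, worse_streak + 1  # tolerate a temporary regression
    return parent, worse_streak  # patience exhausted: keep the parent

# toy fitness: closer to 3 is better
fitness = lambda x: -abs(x - 3)
selected, streak = niching_select(2.0, 2.5, fitness, patience=2, worse_streak=0)
```

With `patience=0` this degenerates to ordinary elitist replacement, so the counter is the only knob separating niching from plain hill climbing in this sketch.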
“…Another way is reducing the population dynamically. For instance, Fan et al [122] use the (µ + λ) evolution…”

Techniques for shortening evaluation time cited in the survey (reconstructed from table residue):
- Weight inheritance: [19], [36], [38], [40], [55], [56], [62], [67], [77], [78], [84], [86], [88], [105], [109], [111], [120], [121], [125], [132]
- Early stopping policy: [21], [32], [36], [42], [43], [62], [65], [71], [79], [80], [83], [86], [95], [98], [103], [108], [112], [114], [125], [145]
- Reduced training set: [46], [84], [124], [129], [188]
- Reduced population: [98],…
Section: Shorten the Evaluation Timementioning
confidence: 99%
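The (µ + λ) evolution mentioned above can be sketched in a few lines: each generation, λ mutated offspring are generated from the current µ parents, and parents and offspring then compete jointly, with only the best µ surviving. The toy scalar individuals and parameter values below are illustrative assumptions, not the setup of [122].

```python
import random

def mu_plus_lambda(init_pop, fitness, mutate, mu, lam, generations, rng):
    """(mu + lambda) evolution strategy on a toy problem.

    Parents survive into the joint selection pool, so the best fitness
    never decreases across generations (elitist selection).
    """
    pop = list(init_pop)
    for _ in range(generations):
        # Create lam offspring by mutating randomly chosen parents.
        offspring = [mutate(rng.choice(pop), rng) for _ in range(lam)]
        # Parents and offspring compete together; keep the best mu.
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:mu]
    return pop

# toy example: maximize -(x - 3)^2, optimum at x = 3
rng = random.Random(0)
best = mu_plus_lambda([0.0], lambda x: -(x - 3) ** 2,
                      lambda x, r: x + r.gauss(0, 0.5),
                      mu=2, lam=4, generations=50, rng=rng)
```

In NAS settings the expensive part is the fitness call (training a network), which is why the speed-up techniques listed above (weight inheritance, early stopping, reduced training sets, reduced populations) all target the evaluation step rather than the selection loop itself.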
“…Liu et al [29] suggested using a genetic algorithm that evolves individuals via mutation by adding, removing, or editing edges in a computation graph, which can be translated into a convolutional neural network. Finally, first Kramer [30] and later Prellberg and Kramer [31] presented an approach based on an evolutionary algorithm that relies only on the mutation operator and introduced a mechanism to support parameter inheritance, so that descendants during the evolutionary process do not need to learn weights from scratch. Assunção et al [32] presented DENSER, a work where a multi-level encoding of candidate solutions allows optimization of the network topology and the activation functions, with the authors claiming that it can also be used to evolve the hyperparameters of the learning process as well as of the data augmentation stage.…”
Section: Complexity
confidence: 99%