2019
DOI: 10.1038/s42256-018-0006-z
Designing neural networks through neuroevolution

Cited by 566 publications (336 citation statements)
References 108 publications
“…Thus, the probability for the transition $\bar{x}_i \to \bar{x}_j$ is estimated as $[\Gamma_\tau]_{ij} = M_{ij}(\tau)/M_i$, where $M_{ij}(\tau)$ is the number of transitions $\bar{x}_i \to \bar{x}_j$ that occurred for the projected trajectory between $t$ and $t+\tau$ within the time period $0 \le t \le t_0 - \tau$, and $M_i$ is the number of times state $\bar{x}_i$ is visited within the same training time frame. Once this first estimation is at hand, the optimization of the transition operator requires deep learning, as the projection of each MD-generated microstate $x(t_0 + n\tau)$ for $n = 1, \dots, W$, with $W$ satisfying $(t_0 + W\tau) \le t_f < [t_0 + (W+1)\tau]$, must be approximated as $[\Gamma_\tau]^n \pi x(t_0)$. In other words, the parametrization of the transition operator in quotient space is optimized to minimize the loss function $\mathcal{L}(\Gamma_\tau)$ given by
$$\mathcal{L}(\Gamma_\tau) = W^{-1} \sum_{n=1}^{W} \left\lVert \pi x(t_0 + n\tau) - [\Gamma_\tau]^n \pi x(t_0) \right\rVert^2$$…”
Section: Theoretical Framework (mentioning)
confidence: 99%
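The count-based estimate and the loss in the excerpt above can be illustrated with a minimal numpy sketch. It assumes the trajectory has already been discretized into integer basin labels at lag time τ, and that `pi_traj` holds the projected (e.g. one-hot) states; all names and shapes are illustrative, not taken from the cited work.

```python
import numpy as np

def estimate_transition_matrix(labels, n_states):
    """Count-based estimate [Gamma_tau]_ij = M_ij(tau) / M_i from a basin-label
    trajectory sampled every lag time tau (rows normalised by visit counts)."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(labels[:-1], labels[1:]):
        counts[i, j] += 1
    visits = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, visits, out=np.zeros_like(counts), where=visits > 0)

def loss(gamma, pi_traj):
    """L(Gamma_tau) = W^-1 sum_n || pi x(t0 + n tau) - Gamma_tau^n pi x(t0) ||^2,
    where pi_traj[n] is the projected state at t0 + n*tau (pi_traj[0] = pi x(t0))."""
    W = len(pi_traj) - 1
    p = np.asarray(pi_traj[0], dtype=float)
    total = 0.0
    for n in range(1, W + 1):
        p = gamma.T @ p  # apply the row-stochastic operator once more per lag time
        total += np.sum((np.asarray(pi_traj[n]) - p) ** 2)
    return total / W
```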
“…We know this loss function is the correct one since $\mathcal{L}(\Gamma_\tau) = 0$ if and only if the transition operator makes the previous diagram commutative. Once the optimal $\Gamma_\tau = \operatorname{argmin} \mathcal{L}(\Gamma_\tau)$ has been obtained from stochastic steepest descent, the modulo-basin projected trajectory can be propagated beyond MD-accessible timescales. At this stage, the coarse-grained modulo-basin trajectory needs to be decoded back to the MD-level atomistic description.…”
Section: Theoretical Framework (mentioning)
confidence: 99%
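Once an optimized Γτ is in hand, propagating the modulo-basin description beyond MD-accessible timescales amounts to iterating the operator. A minimal sketch, assuming Γτ is available as a row-stochastic numpy array and `p0` is a starting distribution over basins; the decoding back to atomistic detail mentioned in the excerpt is not attempted here.

```python
import numpy as np

def sample_coarse_trajectory(gamma, p0, n_steps, rng=None):
    """Sample a modulo-basin trajectory of n_steps lag times from the
    row-stochastic transition matrix gamma, starting from distribution p0."""
    rng = rng or np.random.default_rng()
    state = rng.choice(len(p0), p=p0)
    trajectory = [state]
    for _ in range(n_steps):
        state = rng.choice(gamma.shape[1], p=gamma[state])  # one lag-time step
        trajectory.append(state)
    return trajectory
```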
“…For the Goal-ANN, we have no target value, since we do not know the "correct" goal. We therefore optimize this network using neuroevolution, a technique that employs a population of neural networks and lets the ones that perform their task best become further adapted and specialized to the problem [14].…”
Section: Combining Deep Learning and Neuroevolution (mentioning)
confidence: 99%
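As a generic illustration of the population-based search this excerpt refers to (not the citing authors' specific implementation), a minimal weight-level neuroevolution loop might look like the sketch below; the `fitness` callback and the genome size are placeholders.

```python
import numpy as np

def evolve_weights(fitness, n_weights, pop_size=20, n_generations=50,
                   sigma=0.1, elite_frac=0.25, rng=None):
    """Keep the best-performing weight vectors each generation and refill the
    population with Gaussian-mutated copies of them (simple elitist scheme)."""
    rng = rng or np.random.default_rng()
    population = [rng.normal(0.0, 1.0, n_weights) for _ in range(pop_size)]
    n_elite = max(1, int(elite_frac * pop_size))
    best = population[0]
    for _ in range(n_generations):
        order = np.argsort([fitness(w) for w in population])[::-1]  # best first
        elites = [population[i] for i in order[:n_elite]]
        best = elites[0]
        population = list(elites)
        while len(population) < pop_size:
            parent = elites[rng.integers(n_elite)]
            population.append(parent + rng.normal(0.0, sigma, n_weights))
    return best
```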
“…This Goal-ANN is trained using neuroevolution [14]: a population of neural networks competes on its ability to produce relevant goals. The best networks are randomly changed by adding or removing nodes and connections, while the worse networks are discarded.…”
Section: Adaptive Goals Guiding Action Selection (mentioning)
confidence: 99%
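The structural changes described here (adding or removing nodes and connections) can be sketched as a NEAT-style mutation on a toy genome. The dict-based encoding below is an assumption made purely for illustration, not the representation used in the cited or citing papers.

```python
import random

def mutate_structure(genome, p_add_conn=0.3, p_add_node=0.2, p_del_conn=0.1):
    """Illustrative structural mutation on a genome assumed to look like
    {'nodes': set of int ids, 'conns': {(src, dst): weight}}."""
    nodes, conns = genome['nodes'], genome['conns']
    if random.random() < p_add_conn and len(nodes) >= 2:
        src, dst = random.sample(sorted(nodes), 2)
        conns.setdefault((src, dst), random.uniform(-1, 1))      # add a connection
    if random.random() < p_add_node and conns:
        (src, dst), w = random.choice(list(conns.items()))
        new = max(nodes) + 1
        nodes.add(new)                                            # split an edge with a new node
        del conns[(src, dst)]
        conns[(src, new)] = 1.0
        conns[(new, dst)] = w
    if random.random() < p_del_conn and conns:
        del conns[random.choice(list(conns.keys()))]              # remove a connection
    return genome
```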