2022
DOI: 10.2139/ssrn.4305368
Neorl: Neuroevolution Optimization with Reinforcement Learning - Applications to Carbon-Free Energy Systems

Cited by 4 publications (1 citation statement), published in 2022 and 2024. References 0 publications.
“…We use PSO, DE, and BO to call the numerical simulation module iteratively to maximize the FOM. The DE and PSO algorithm is taken from the open-source code NEORL [53] (https://neorl.readthedocs.io/en/latest/). The BO algorithm is from the open-source code Bayesian Optimization [54] (https://github.…”

Section: Benchmarking of the Algorithm
Citation type: mentioning (confidence: 99%)
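The citing work drives its numerical simulation with NEORL's DE and PSO optimizers to maximize a figure of merit (FOM). Below is a minimal sketch of that pattern using NEORL's documented DE interface; the toy fitness function, variable bounds, and algorithm settings (npop, F, CR, ngen) are illustrative assumptions, not values from the cited paper.

```python
# Sketch: maximizing a figure of merit (FOM) with NEORL's differential evolution.
# The objective below is a placeholder for the paper's numerical simulation module.
from neorl import DE

def fom(individual):
    # Placeholder objective: in the cited setup this would call the
    # simulation and return the FOM for a candidate design vector.
    return -sum(x**2 for x in individual)

# Assumed search space: four continuous design variables in [-5, 5].
bounds = {f'x{i}': ['float', -5.0, 5.0] for i in range(1, 5)}

de = DE(mode='max',          # maximize the FOM, as in the citing study
        bounds=bounds,
        fit=fom,
        npop=40, F=0.5, CR=0.7,
        ncores=1, seed=1)

x_best, fom_best, history = de.evolute(ngen=100, verbose=0)
print('best design:', x_best, 'best FOM:', fom_best)
```

Swapping DE for PSO follows the same call pattern in NEORL (construct the optimizer with mode, bounds, and fit, then call evolute), which is what lets the citing authors benchmark both against the Bayesian Optimization baseline.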