2021
DOI: 10.1016/j.nucengdes.2020.110966
Physics-informed reinforcement learning optimization of nuclear assembly design

Cited by 48 publications (13 citation statements)
References 34 publications
“…This randomness and the use of multiple algorithms ensure independence, flexibility, and better coverage of the search space. NEORL (NeuroEvolution Optimization with Reinforcement Learning) is a set of implementations of hybrid algorithms combining neural networks and evolutionary computation, based on a wide range of machine learning and evolutionary intelligence architectures [26,27]. NEORL was developed by one of the authors of the current study, and it offers robust implementations of ES, GWO, MFO, and DE, so we utilize it in this work.…”
Section: Evolutionary Optimization (mentioning)
confidence: 99%
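The NEORL package referenced in this excerpt is openly available. As a concrete illustration of the interface the citing authors rely on, here is a minimal sketch of running NEORL's DE optimizer, based on the package's documented usage; the sphere fitness function and all parameter values are illustrative stand-ins, not the assembly-design objective from the cited work:

```python
from neorl import DE

def FIT(individual):
    """Illustrative fitness: the sphere function (a real study
    would supply its own physics-based objective here)."""
    return sum(x**2 for x in individual)

# Five continuous decision variables, each bounded in [-100, 100]
nx = 5
BOUNDS = {'x' + str(i): ['float', -100, 100] for i in range(1, nx + 1)}

# Differential evolution with illustrative hyperparameters
de = DE(mode='min', bounds=BOUNDS, fit=FIT,
        npop=50, F=0.5, CR=0.7, ncores=1, seed=1)
x_best, y_best, de_hist = de.evolute(ngen=120, x0=None, verbose=False)
print('Best x:', x_best, 'Best y:', y_best)
```

Per NEORL's documentation, the other algorithms mentioned in the excerpt (ES, GWO, MFO) follow the same bounds/fit/evolute pattern, which is what makes swapping algorithms over one search space straightforward.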
“…Currently, to preserve a generic implementation, the user can use a variety of a priori methods to convert a multiobjective problem into a single-objective one. Indeed, NEORL has demonstrated successful results on complex multiobjective optimization problems, as illustrated in [34,55,44], where ε-constrained and linear scalarization methods, in conjunction with different neural, evolutionary, and neuroevolution algorithms, have been effective in solving the multiobjective optimization.…”
Section: Multiobjective Optimization and Constraint Handling (mentioning)
confidence: 99%
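To make the a priori conversion concrete, here is a minimal, library-free sketch of the two techniques the excerpt names, linear scalarization and the ε-constraint method, for a hypothetical two-objective problem; f1, f2, and the weights are illustrative placeholders, not the objectives used in the cited studies:

```python
import numpy as np

def f1(x):
    # First illustrative objective: distance from the origin
    return np.sum(x**2)

def f2(x):
    # Second illustrative objective: distance from the point (1, ..., 1)
    return np.sum((x - 1.0)**2)

def scalarized(x, w=0.5):
    """Linear scalarization: collapse two objectives into one
    weighted sum; sweeping w traces different trade-off points."""
    return w * f1(x) + (1.0 - w) * f2(x)

def eps_constrained(x, eps=0.5, penalty=1e6):
    """Epsilon-constraint: minimize f1 while treating f2 <= eps as a
    constraint, enforced here with a simple penalty term."""
    violation = max(0.0, f2(x) - eps)
    return f1(x) + penalty * violation

x = np.array([0.3, 0.7])
print(scalarized(x, w=0.5), eps_constrained(x, eps=0.5))
```

Either scalar function can then be handed directly to a single-objective optimizer such as the DE example above.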
“…This inspiration has led to the term "learning to optimize" [31], which has been followed by many attempts to use RL and neural networks to solve optimization problems. See, for example, [32] on using RL with a recurrent neural network policy for combinatorial optimization, [33] on using graph embeddings and RL to solve combinatorial optimization over graphs, or [34] on using deep Q-learning and proximal policy optimization for physics-informed optimization of nuclear fuel. A comprehensive overview of using machine learning and neural networks to solve combinatorial optimization is provided in [35].…”
Section: Introduction (mentioning)
confidence: 99%
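As a schematic of the "learning to optimize" idea the excerpt describes, the sketch below applies tabular Q-learning to a tiny combinatorial problem built one decision at a time; the bit-string problem and its reward are hypothetical stand-ins, far simpler than the deep Q-learning and proximal policy optimization setups of [34]:

```python
import random

# Toy combinatorial problem: choose a binary string of length N that
# maximizes a simple score (a stand-in for a design objective).
N = 8
def score(bits):
    # Hypothetical reward: count of adjacent bit flips (illustrative only)
    return sum(1 for i in range(N - 1) if bits[i] != bits[i + 1])

# Tabular Q-learning over sequential bit decisions:
# state = (position, previous bit), action = next bit in {0, 1}
Q = {}
alpha, gamma, eps = 0.1, 0.95, 0.1

def get_q(state, action):
    return Q.get((state, action), 0.0)

for episode in range(5000):
    bits, state = [], (0, 0)
    for pos in range(N):
        # Epsilon-greedy action selection
        if random.random() < eps:
            a = random.randint(0, 1)
        else:
            a = max((0, 1), key=lambda act: get_q(state, act))
        bits.append(a)
        next_state = (pos + 1, a)
        r = score(bits) if pos == N - 1 else 0.0   # terminal reward only
        best_next = max(get_q(next_state, act) for act in (0, 1))
        target = r + (0.0 if pos == N - 1 else gamma * best_next)
        Q[(state, a)] = get_q(state, a) + alpha * (target - get_q(state, a))
        state = next_state

# Greedy rollout with the learned Q-table
bits, state = [], (0, 0)
for pos in range(N):
    a = max((0, 1), key=lambda act: get_q(state, act))
    bits.append(a)
    state = (pos + 1, a)
print(bits, score(bits))
```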
“…This motivates the creation of fast-running, accurate surrogate models for fuel performance analysis to achieve a tighter coupling with current core simulators. In particular, we plan to integrate the surrogate model into a recent physics-informed reinforcement learning (RL) optimization framework [3,4]. The current constraints and objectives employed in this physics-informed RL framework accommodate only the coupled neutronics/thermal-hydraulics response from a commercially licensed code.…”
Section: Introduction (mentioning)
confidence: 99%
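As an illustration of the surrogate-modeling workflow this excerpt motivates, here is a minimal sketch that fits a fast regressor to samples from a stand-in for an expensive simulation code; the simulator function, inputs, and model choice are all illustrative assumptions, not the authors' actual fuel performance code or surrogate architecture:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical stand-in for an expensive fuel-performance code:
# maps design inputs to a scalar response (e.g., a peak temperature).
def expensive_simulator(x):
    return np.sin(3 * x[0]) + 0.5 * x[1] ** 2

# Sample the design space and evaluate the expensive code once, offline
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.array([expensive_simulator(x) for x in X])

# Fit a cheap-to-evaluate surrogate on the collected samples
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X, y)

# The fitted surrogate can now replace the code inside an RL or
# evolutionary optimization loop, where millions of evaluations may occur.
x_new = np.array([[0.2, -0.4]])
print(surrogate.predict(x_new))
```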