2022
DOI: 10.1016/j.aei.2021.101512
Graph-based reinforcement learning for discrete cross-section optimization of planar steel frames

Cited by 27 publications (4 citation statements)
References 32 publications
“…Otherwise, the global effort for the RL agent is significantly larger compared to the conventional optimizer. Compared to the approaches used in the reviewed literature concerning RL for structural optimization, the agent was directly calling the FE-model (Hayashi and Ohsaki, 2020; Brown et al., 2022; Hayashi and Ohsaki, 2022; …). This works for small academic models, but not for industry-standard models as used for occupant safety simulations in modern vehicle development.…”
Section: Discussion
confidence: 99%
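For context, the pattern discussed here can be sketched as follows. This is a minimal illustration, not the cited implementations: the environment class, the solve_frame routine, and its placeholder return values are all assumptions. The point is that every agent action triggers a complete finite-element solve, which is affordable for small academic frames but not for crash-scale industrial models.

```python
import numpy as np

def solve_frame(sections):
    """Hypothetical stand-in for a full finite-element analysis of the frame."""
    rng = np.random.default_rng(0)
    return rng.uniform(0.2, 1.2, size=len(sections))  # member stress ratios (placeholder)

class DirectFEEnv:
    """Toy environment in which every step calls the FE model directly."""
    def __init__(self, n_members, n_sections):
        # Start from the largest section in the prescribed list for every member.
        self.sections = np.full(n_members, n_sections - 1, dtype=int)

    def step(self, member, new_section):
        self.sections[member] = new_section
        stress_ratios = solve_frame(self.sections)          # one full FE solve per action
        reward = -np.maximum(stress_ratios - 1.0, 0.0).sum()  # penalize constraint violation
        return self.sections.copy(), reward
```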
“…Notably, the training is computationally expensive, but it only needs to be done once. In further research, Hayashi and Ohsaki (2022) applied the technique to the optimization of planar steel frames under short- and long-term elastic load conditions, whereby the agent aims to minimize the structural volume under several practical constraints. In each action, the agent specifies the cross-section of each member by choosing it from a prescribed list.…”
Section: Optimization Of Mechanical Systems With Reinforcement Learning
confidence: 99%
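As a rough illustration of that action scheme (a minimal sketch; the section areas, member lengths, scaling factors, and helper names are assumptions, not the paper's actual data), a flat discrete action can be decoded into a (member, cross-section) pair drawn from a prescribed list, with the reward tied to structural volume and constraint violations:

```python
import numpy as np

# Assumed prescribed list of cross-sectional areas (cm^2); purely illustrative values.
SECTION_AREAS = np.array([21.6, 27.7, 33.0, 40.1, 52.7])
MEMBER_LENGTHS = np.array([400.0, 400.0, 600.0, 600.0])  # cm, hypothetical frame

def decode_action(action):
    """Map a flat discrete action to (member index, section index)."""
    member = action // len(SECTION_AREAS)
    section = action % len(SECTION_AREAS)
    return member, section

def reward(sections, stress_ratios):
    """Negative structural volume, with a penalty for violated constraints."""
    volume = (SECTION_AREAS[sections] * MEMBER_LENGTHS).sum()
    violation = np.maximum(stress_ratios - 1.0, 0.0).sum()
    return -(volume / 1.0e4) - 10.0 * violation

# Example: flat action 7 decodes to member 1, section 2.
sections = np.zeros(len(MEMBER_LENGTHS), dtype=int)
m, s = decode_action(7)
sections[m] = s
print(reward(sections, stress_ratios=np.array([0.6, 0.9, 1.1, 0.7])))
```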
“…To stabilize the training process, we further introduce mini-batch training, in which trainable parameters are updated using multiple samples at the same time [5]. During the sampling phase, prioritized experience replay [8] is employed to preferentially use samples that are highly unexpected to the agent.…”
Section: Loss Minimization For Training
confidence: 99%
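A minimal sketch of how these two ingredients can fit together (the buffer layout, the hyper-parameters alpha and beta, and the omission of a sum-tree are simplifications assumed here, not the authors' implementation): transitions are stored with priorities proportional to |TD error|^alpha, mini-batches are drawn according to those priorities, and importance-sampling weights with exponent beta correct the resulting bias.

```python
import numpy as np

rng = np.random.default_rng(42)

class PrioritizedReplay:
    """Proportional prioritized experience replay (simplified, list-based)."""
    def __init__(self, capacity, alpha=0.6, beta=0.4):
        self.capacity, self.alpha, self.beta = capacity, alpha, beta
        self.data, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size):
        p = np.asarray(self.priorities)
        probs = p / p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.data) * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

    def update_priorities(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = (abs(e) + 1e-6) ** self.alpha

# Usage: push transitions with their TD errors, then train on mini-batches.
buffer = PrioritizedReplay(capacity=10000)
for t in range(100):
    buffer.add(transition={"s": t, "a": 0, "r": -1.0, "s2": t + 1}, td_error=rng.normal())
idx, batch, weights = buffer.sample(batch_size=32)
# After computing new TD errors with the Q-network, refresh the sampled priorities:
buffer.update_priorities(idx, td_errors=rng.normal(size=32))
```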
“…RL is a method to learn a sequential decision-making process that maximizes rewards through a large number of simulations. Although RL has been reported to perform well in tasks that are difficult to control by rule-based programming [3] and in solving optimization problems [4], RL has rarely been applied to the field of skeletal structures because of their complex connectivity [5].…”
Section: Introduction
confidence: 99%