2023
DOI: 10.1016/j.ins.2023.02.033
A model-based deep reinforcement learning approach to the nonblocking coordination of modular supervisors of discrete event systems

Cited by 6 publications (2 citation statements) · References 44 publications
“…The Q table for the transfer line example is approximated by a fully connected three-layer neural network, where the numbers of neurons of the input layer, the middle layer, and the output layer are 6, 8, and 8, respectively. The details of the DQN algorithm are elaborated in [46].…”
Section: Deep Reinforcement Learning Framework
Confidence: 99%
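For context, here is a minimal PyTorch sketch of the Q-network this statement describes: a fully connected three-layer network with 6 input, 8 middle, and 8 output neurons. Only the layer sizes come from the statement; the framework, activation function, and greedy readout are assumptions, not details from the cited paper.

```python
# Hypothetical sketch of the described Q-network (6-8-8 fully connected).
# Layer sizes are from the citation statement; activation and framework
# choices are assumptions.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    def __init__(self, n_inputs: int = 6, n_hidden: int = 8, n_outputs: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden),   # input layer -> middle layer
            nn.ReLU(),                       # activation is assumed, not stated
            nn.Linear(n_hidden, n_outputs),  # middle layer -> output layer
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # One Q-value per action: one output neuron per action.
        return self.net(state)


q_net = QNetwork()
state = torch.zeros(1, 6)        # placeholder state encoding
q_values = q_net(state)          # shape (1, 8): one value per action
action = int(q_values.argmax())  # greedy action selection
```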
“…and the values of rewards are shown in Table 3. Other parameters for the DQN algorithm are the same as those in Table 3 in [46]. Fig.…”
Section: Deep Reinforcement Learning Framework
Confidence: 99%
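To make the referenced setup concrete, below is a minimal sketch of a standard DQN temporal-difference update, reusing the QNetwork sketch above. The discount factor, learning rate, and reward values are placeholders only, since the statement defers the actual numbers to Table 3 and to [46]; no target network or replay-buffer details are specified there, so none are shown here.

```python
# Hypothetical single DQN update step. All numeric values are placeholders;
# the real reward values and hyperparameters are in the cited tables.
import torch
import torch.nn.functional as F

q_net = QNetwork()   # the 6-8-8 network sketched above
gamma = 0.9          # discount factor: placeholder, not from the paper
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)  # lr: placeholder


def dqn_update(batch):
    states, actions, rewards, next_states, dones = batch
    # Q(s, a) for the actions actually taken
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bellman target; the same network is used (no target network shown)
    with torch.no_grad():
        target = rewards + gamma * q_net(next_states).max(dim=1).values * (1 - dones)
    loss = F.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Dummy batch of 4 transitions showing the expected shapes.
batch = (
    torch.zeros(4, 6),                 # states
    torch.zeros(4, dtype=torch.long),  # actions
    torch.zeros(4),                    # rewards (real values are in Table 3)
    torch.zeros(4, 6),                 # next states
    torch.zeros(4),                    # done flags (1.0 when terminal)
)
dqn_update(batch)
```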