2023
DOI: 10.1007/s00521-023-08877-3
Double deep Q-network-based self-adaptive scheduling approach for smart shop floor

Cited by 2 publications (2 citation statements)
References 40 publications
“…Other popular areas of proposed DRL applications are order selection and scheduling, in particular, dynamic scheduling [19,25]. In [31], a self-adaptive scheduling approach based on DDQN is proposed. To validate the effectiveness of the self-adaptive scheduling approach, the simulation model of a semiconductor production demonstration unit was used.…”
Section: Deep Reinforcement Learning in Production Systems (mentioning, confidence: 99%)
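
For context on the cited method, below is a minimal, illustrative sketch of the double DQN (DDQN) target update that the citing paper refers to; it is not the implementation from [31], and the network shapes, discount factor, and names (online_net, target_net, ddqn_targets) are placeholder assumptions.

```python
# Minimal, illustrative double DQN target computation (not the code from [31]).
# Assumed placeholders: online_net / target_net are small Q-networks, gamma is the discount.
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 8, 4, 0.99

online_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
target_net.load_state_dict(online_net.state_dict())  # periodic sync, as in standard DDQN

def ddqn_targets(rewards, next_states, dones):
    """Double DQN: the online network selects the next action,
    while the target network evaluates it, reducing overestimation bias."""
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)   # action selection
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)  # action evaluation
        return rewards + gamma * (1.0 - dones) * next_q

# Example batch of transitions (random data, purely for illustration)
batch = 5
targets = ddqn_targets(torch.rand(batch), torch.rand(batch, state_dim), torch.zeros(batch))
print(targets.shape)  # torch.Size([5])
```

The decoupling of action selection from action evaluation is the only difference from vanilla DQN; how the cited work maps shop-floor states, actions, and rewards onto this update is described in the paper itself.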
“…This gap motivated research in the area of using DTs of production systems based on modern computer discrete simulation systems as an environment for the interaction of reinforced machine learning agents, the results of which are presented in this paper. Additionally, many studies provide the parameters of the RL algorithms used (e.g., [31]). Still, there is no analysis or discussion of the impact of their values on the results obtained or the speed of the training process.…”
Section: Introduction (mentioning, confidence: 99%)