2021
DOI: 10.1016/j.procir.2020.05.210
Dynamic scheduling in a job-shop production system with reinforcement learning

Cited by 33 publications (15 citation statements)
References 19 publications
“…Shahrabi et al 37 used RL to improve the scheduling performance for dynamic job-shop scheduling problems, considering random job arrivals and machine failures. Kardos et al 38 designed a scheduling algorithm based on Q-learning to solve the dynamic job-shop scheduling problem for reducing the average lead-time of production orders. However, it is difficult for basic Q-learning algorithms to adapt to problems with a large action space.…”
Section: Literature Review
confidence: 99%
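The excerpt above describes Q-learning applied to dynamic job-shop dispatching. As a rough illustration of the idea, here is a minimal tabular Q-learning sketch in which the agent picks a dispatching rule per decision point; the state encoding, rules, and reward are toy assumptions of ours, not the formulation of Kardos et al. or Shahrabi et al.

```python
# Minimal tabular Q-learning sketch for a dispatching decision.
# State = bucketed queue length (illustrative); actions = dispatching rules.
# Reward shaping below is a hypothetical lead-time proxy, not from the paper.
import random
from collections import defaultdict

ACTIONS = ["FIFO", "SPT", "EDD"]   # candidate dispatching rules
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over dispatching rules."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy interaction loop: longer queues give lower (more negative) reward.
random.seed(0)
state = 0
for _ in range(1000):
    action = choose_action(state)
    reward = -state - (0.0 if action == "SPT" else 0.5)  # hypothetical shaping
    next_state = random.randint(0, 4)
    update(state, action, reward, next_state)
    state = next_state
```

The table grows only with visited state-action pairs, which is why the excerpt notes that basic Q-learning struggles once the action space becomes large: a table over all (state, action) pairs no longer fits or generalizes.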
“…The simulation results showed that the designed algorithm outperformed the existing rules. Combining learning and optimization, Martínez et al [61] proposed a two-stage method to solve the flexible job-shop scheduling problem. In the first stage, Q-learning was used for machine assignment and job scheduling, generating a feasible solution.…”
Section: RL for Job Shop Scheduling
confidence: 99%
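The two-stage idea cited above — learn a machine assignment first, then build a feasible schedule from it — can be sketched as follows. The problem data, the myopic reward, and the SPT ordering in stage two are our own toy assumptions, not the formulation of Martínez et al.

```python
# Hedged sketch of a two-stage approach for a flexible job shop:
# stage 1 uses tabular Q-learning to pick a machine per operation,
# stage 2 orders the assigned operations with a simple SPT rule.
import random
from collections import defaultdict

random.seed(1)
MACHINES = [0, 1]
# operations: (job_id, processing_time_on_machine[m]) -- toy data
OPS = [(0, [3, 5]), (1, [4, 2]), (2, [6, 6])]

Q = defaultdict(float)
ALPHA, EPS = 0.2, 0.3  # myopic reward (no discounting) keeps the toy simple

def assign(op_idx):
    """Epsilon-greedy machine choice for one operation."""
    if random.random() < EPS:
        return random.choice(MACHINES)
    return max(MACHINES, key=lambda m: Q[(op_idx, m)])

# Stage 1: learn machine assignment; reward = negative processing time.
for _ in range(500):
    for i, (_, times) in enumerate(OPS):
        m = assign(i)
        Q[(i, m)] += ALPHA * (-times[m] - Q[(i, m)])

# Stage 2: greedy assignment from Q, then SPT ordering per machine
# yields a feasible schedule.
assignment = {i: max(MACHINES, key=lambda m: Q[(i, m)]) for i in range(len(OPS))}
schedule = {m: sorted((i for i in assignment if assignment[i] == m),
                      key=lambda i: OPS[i][1][m]) for m in MACHINES}
```

With this reward, Q[(i, m)] converges toward the negative processing time of operation i on machine m, so the greedy assignment routes each operation to its fastest machine; the second stage then only has to sequence what stage one assigned.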
“…The reward functions and the action space are shaped by hand according to the scenario. Kardos et al (2021) used Q-learning to select a service provider for the next operation in a small scenario with only a few steps. The approach presented by May et al (2021) uses multiple agents for routing and scheduling.…”
Section: Reinforcement Learning
confidence: 99%