2020
DOI: 10.1109/access.2020.3000781

A Reinforcement Learning Approach to Dynamic Scheduling in a Product-Mix Flexibility Environment

Abstract: Machine bottlenecks, resulting from shifting and unbalanced machine loads caused by resource capacity limitations, impair production systems that rely on product-mix flexibility. Thus, the knowledge base (KB) of a dynamic scheduling control system should itself be dynamic and include a knowledge-revision mechanism for monitoring crucial changes that occur in the production system. In this paper, reinforcement learning (RL)-based dynamic scheduling and a selection mechanism for multiple dynamic scheduling rules (MDSRs) are proposed …
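The abstract describes using RL to switch among candidate dispatching rules as shop status changes. As a minimal sketch of that idea (the rule set, state features, and reward below are illustrative assumptions, not the paper's actual KB formulation), a tabular Q-learning agent in Python might look like:

import random
from collections import defaultdict

# Candidate dispatching rules (illustrative, not the paper's exact MDSR set)
RULES = ["SPT", "EDD", "FIFO"]

Q = defaultdict(float)            # Q-values keyed by (state, rule)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def shop_state(queue_len, utilization):
    # Discretize system-status attributes into a coarse state signature.
    return ("long_q" if queue_len >= 5 else "short_q",
            "busy" if utilization >= 0.8 else "idle")

def select_rule(state):
    # Epsilon-greedy choice among the candidate dispatching rules.
    if random.random() < EPSILON:
        return random.choice(RULES)
    return max(RULES, key=lambda r: Q[(state, r)])

def update(state, rule, reward, next_state):
    # One-step Q-learning update; the reward could be, e.g., negative mean
    # tardiness over the next scheduling interval (an assumption here).
    best_next = max(Q[(next_state, r)] for r in RULES)
    Q[(state, rule)] += ALPHA * (reward + GAMMA * best_next - Q[(state, rule)])

Each time the simulated shop reaches a decision point, select_rule picks the rule to apply and update folds the observed performance back into the knowledge base.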

Cited by 19 publications (8 citation statements)
References 71 publications (95 reference statements)
“…The authors show that, based on the observed system workload and queues, performance can be improved by dynamically selecting sequencing rules. Similarly, other authors show that dynamic adaptation of machine-specific rules enables significant performance improvements across different scenarios (Shiue et al. 2018, 2020).…”
Section: = ( )
confidence: 70%
“…Chan et al. used a set of features to represent shop situations and applied a machine learning model to predict the best dispatching rules over multiple online simulation runs [184]. Similarly, Shiue et al. used reinforcement learning to dynamically select appropriate dispatching rules based on current shop information [185]. The other method is simulation optimization, which embeds optimization procedures within the simulation routine [186].…”
Section: Stochastic and Dynamic Scheduling
confidence: 99%
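To make the first approach concrete (a rough sketch only; the feature set, labels, and classifier choice are assumptions, not Chan et al.'s actual model), one could train a classifier offline on simulation outcomes and query it online:

from sklearn.tree import DecisionTreeClassifier

# Training data gathered from offline simulation runs: each row is a
# shop-status snapshot, each label the rule that performed best in
# simulation from that snapshot (values invented for illustration).
X = [
    [12, 0.85, 3],   # [queue length, utilization, tardy jobs]
    [ 4, 0.60, 0],
    [ 9, 0.92, 5],
    [ 3, 0.55, 1],
]
y = ["SPT", "FIFO", "EDD", "FIFO"]

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Online: predict which dispatching rule to apply for the current status.
print(clf.predict([[10, 0.90, 4]])[0])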
“… (table excerpt; first and last rows truncated in the source)

Ref | Environment | Problem characteristics | Objectives | Method
… | … | … | throughput, cycle time, on-time delivery rate, movement | dynamic dispatching rule
[181] | wafer fabrication | dynamic job arrivals, load balancing | throughput, cycle time | SOM-based multi-rules selection method
[182] | wafer fabrication | r_j, p-batch, aux, stochastic machine | throughput, utilization | extreme learning, multiple dispatching rules
[183] | assembly | r_j, s_lk, aux, recrc | machine utilization | case-based reasoning, GA
[184] | wafer fabrication | p-batch, recrc | throughput, cycle time | machine learning, simulation, dispatching rules
[185] | wafer fabrication | p-batch, aux, recrc | cycle time | reinforcement learning
[187] | wafer fabrication | r_j, lot processing, p-batch, AMHS | average delay, average WIP, average cycle time | simulation optimization, GA, multiple dispatching rules
[188] | wafer fabrication | r_j, recrc | average cycle time | adaptive simulation-based optimization, GA
[189] | non-specific | r_j, s_lk, etc. | cycle time | simulation optimization, genetic programming
[190] | non-specific | r_j, s_lk, etc. | … | … …”
Section: References
confidence: 99%
“…Shiue, Lee and Su [39] studied an RL-based real-time scheduling problem using a multiple-dispatching-rules strategy to respond to changes in a manufacturing system. Shiue et al. [51] studied the dynamic scheduling of a flexible manufacturing system and semiconductor wafer fabrication using RL. Zhang et al. [52] studied the scheduling of unreliable parallel machines to minimize mean weighted tardiness using RL.…”
Section: Introduction
confidence: 99%