2023
DOI: 10.3390/pr11072018
Efficient Multi-Objective Optimization on Dynamic Flexible Job Shop Scheduling Using Deep Reinforcement Learning Approach

Abstract: Previous research has focused on deep reinforcement learning (DRL) approaches to optimizing various single-objective variants of the dynamic flexible job shop scheduling problem (DFJSP), e.g., energy consumption, earliness and tardiness penalties, and machine utilization rate. These approaches achieve notable improvements in objective metrics compared with metaheuristic algorithms such as the genetic algorithm (GA) and dispatching rules such as most remaining time first (MRT). However, single-objective optimization in the job…

Cited by 13 publications (1 citation statement)
References 36 publications
“…Experiments demonstrate the superiority and stability of their approach in comparison to various combined rules, widely recognized scheduling rules, and conventional deep Q-learning algorithms. Wu et al. [33] propose a dual-layer DDQN architecture to solve the dynamic FJSP with new job arrivals, optimizing both the total delay time and the makespan.…”
Section: Introduction
Confidence: 99%