2021 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME)
DOI: 10.1109/iceccme52200.2021.9590925
Utilizing Multi-Agent Deep Reinforcement Learning For Flexible Job Shop Scheduling Under Sustainable Viewpoints

Cited by 15 publications (6 citation statements)
References 22 publications
“…Considering that a production scheduling task may be conceptualized as the environment within the framework of RL, an agent can acquire a policy of well-designed actions and states, and engage in extensive offline training through interaction with the environment. This concept offers a fresh perspective on addressing scheduling challenges, particularly those characterized by uncertainty and dynamism and requiring stringent real-time constraints, as in the case of a dynamic job shop scheduling problem [22,[43][44][45][46][47].…”
Section: Literature Review
confidence: 99%
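The agent–environment framing described in the excerpt above can be illustrated with a minimal single-machine job-shop environment. The state, action, and reward designs here are illustrative assumptions for the sketch, not the formulations used in the cited papers:

```python
import random

class JobShopEnv:
    """Toy job-shop environment: at each step the agent picks one of
    the remaining jobs to run next on a single machine (assumed setup)."""

    def __init__(self, n_jobs=5, seed=0):
        self.n_jobs = n_jobs
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        # the state is the vector of remaining processing times (assumption)
        self.remaining = [self.rng.randint(1, 9) for _ in range(self.n_jobs)]
        self.time = 0
        return tuple(self.remaining)

    def step(self, action):
        # run the chosen job to completion; reward penalizes elapsed time
        assert self.remaining[action] > 0, "job already finished"
        self.time += self.remaining[action]
        self.remaining[action] = 0
        reward = -self.time  # flow-time-style penalty (assumed design)
        done = all(r == 0 for r in self.remaining)
        return tuple(self.remaining), reward, done

# shortest-processing-time dispatching as a simple baseline policy
env = JobShopEnv()
state = env.reset()
done = False
while not done:
    action = min((i for i, r in enumerate(state) if r > 0),
                 key=lambda i: state[i])
    state, reward, done = env.step(action)
print("makespan:", env.time)
```

An RL agent would replace the dispatching rule at the bottom with a learned policy trained offline against many such episodes, which is the interaction loop the excerpt refers to.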
“…Wang et al. [37] discuss a production scheduling process in which several types of items are manufactured under a hybrid production pattern, using a Multi-Agent Deep Reinforcement Learning (MADRL) model. Popper et al. [43] suggested that MADRL can be employed to optimize flexible production plants in a reactive manner, taking several criteria into account, such as efficiency and ecological target values. Du et al. [61] utilized the DQN algorithm to address the flexible job shop scheduling problem (FJSP) in the presence of varying processing rates, setup times, idle times, and job transportation.…”
Section: Literature Review
confidence: 99%
“…Alignment of the MAS4AI Solution with the RAMI4.0 Architecture (based on DIN SPEC 91345, 2016; Alexopoulos et al., 2020; Popper et al., 2021).…”
Section: Key Concepts For Human-centered Smart Manufacturing
confidence: 99%
“…Each machine is taken as a scheduler agent, which collects the scheduling states of all machines as input for training and executes its own scheduling policy. Popper et al. (2021) [32] proposed a distributed MARL scheduling method for the multi-objective optimization problem of minimizing energy consumption and delivery delay in the production process. The underlying problem is solved with PPO, which regulates the joint behavior of the agents through a common reward function.…”
Section: MARL With DTP
confidence: 99%
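The common reward function mentioned in the excerpt above, which couples the agents' behavior, can be sketched as a shared scalar broadcast to every machine agent. The weighting scheme and the agent names below are illustrative assumptions, not values from the cited paper:

```python
def shared_reward(energy_used, tardiness, w_energy=0.5, w_delay=0.5):
    """Common reward received by all machine agents: jointly penalizes
    energy consumption and delivery delay. The linear weighting is an
    illustrative assumption."""
    return -(w_energy * energy_used + w_delay * tardiness)

# every agent receives the same scalar, coupling their learned policies
agents = ["machine_1", "machine_2", "machine_3"]
r = shared_reward(energy_used=12.0, tardiness=3.0)
rewards = {a: r for a in agents}
```

Because all agents optimize the same signal, improving one objective at the expense of the other is only rewarded if the weighted sum improves, which is what lets a single-reward PPO setup handle the multi-objective trade-off.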
“…NMIR (normalized MIR), represented as Equation (32), reflects the deviation of the makespan solved under the dynamic environment from the best-known makespan.…”
Section: Generalization To Robot Breakdowns
confidence: 99%
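Equation (32) itself is not reproduced in the excerpt. Assuming MIR denotes a makespan increase ratio, a normalized deviation of this kind is conventionally written as:

```latex
\mathrm{NMIR} = \frac{C_{\max}^{\mathrm{dyn}} - C_{\max}^{\mathrm{best}}}{C_{\max}^{\mathrm{best}}}
```

where $C_{\max}^{\mathrm{dyn}}$ is the makespan obtained under the dynamic environment and $C_{\max}^{\mathrm{best}}$ is the best-known makespan; this form is an assumption consistent with the description, not the cited equation verbatim.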