2022
DOI: 10.3390/electronics11040539
A Parallel Deep Reinforcement Learning Framework for Controlling Industrial Assembly Lines

Abstract: Decision-making in a complex, dynamic, interconnected, and data-intensive industrial environment can be improved with the assistance of machine-learning techniques. In this work, a complex instance of industrial assembly line control is formalized and a parallel deep reinforcement learning approach is presented. We consider an assembly line control problem in which a set of tasks (e.g., vehicle assembly tasks) needs to be planned and controlled during their execution, with the aim of optimizing given key perfo…
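The control problem described in the abstract can be illustrated with a toy environment. The sketch below is a minimal, hypothetical formulation and not the paper's actual model: the class name, state encoding, and reward are assumptions. Pending tasks are dispatched to stations one at a time, and the reward penalizes growth of the makespan, one possible proxy for a key performance indicator.

```python
import random

class AssemblyLineEnv:
    """Toy assembly-line control environment (hypothetical illustration).

    State:  current queued workload of each of `n_stations` stations.
    Action: index of the station to dispatch the next pending task to.
    Reward: negative makespan increment, so maximizing cumulative reward
            minimizes total completion time.
    """

    def __init__(self, n_stations=3, n_tasks=6, seed=0):
        self.n_stations = n_stations
        self.n_tasks = n_tasks
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.loads = [0] * self.n_stations  # queued work per station
        # Each pending task has a random processing duration.
        self.pending = [self.rng.randint(1, 5) for _ in range(self.n_tasks)]
        return tuple(self.loads)

    def step(self, action):
        duration = self.pending.pop(0)
        old_makespan = max(self.loads)
        self.loads[action] += duration
        # Penalize only the increase in makespan caused by this dispatch.
        reward = -(max(self.loads) - old_makespan)
        done = not self.pending
        return tuple(self.loads), reward, done

# Greedy baseline policy: always dispatch to the least-loaded station.
env = AssemblyLineEnv()
state, done, total = env.reset(), False, 0.0
while not done:
    action = min(range(env.n_stations), key=lambda i: state[i])
    state, reward, done = env.step(action)
    total += reward
```

A deep RL agent would replace the greedy `min(...)` rule with a learned policy network, and a parallel framework would run many such environment instances concurrently to collect experience faster.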

Cited by 7 publications (3 citation statements)
References 19 publications
“…Trial-and-error learning through interaction with the environment, without requiring pre-collected data or prior expert knowledge, allows RL algorithms to adapt to uncertain conditions, as also discussed by Panzer and Bender (2022). Applications can be found in manufacturing, for instance in scheduling (Dong, Xue, Xiao and Li, 2020), maintenance (Rodríguez, Kubler, de Giorgio, Cordy, Robert and Le Traon, 2022; Yousefi, Tsianikas and Coit, 2022), process control (Spielberg, Tulsyan, Lawrence, Loewen and Gopaluni, 2020), energy management (Lu, Li, Li, Jiang and Ding, 2020), assembly tasks (Tortorelli, Imran, Delli Priscoli and Liberati, 2022), and robot manipulation, discussed in detail by Beltran-Hernandez, Petit, Ramirez-Alpizar and Harada (2020) and Schoettler, Nair, Luo, Bahl, Ojea, Solowjow and Levine (2020).…”
Section: Related Work and Contribution
confidence: 99%
“…Finally, the optimal control of industrial assembly lines was studied in Reference [10]. In this work, a complex instance of industrial assembly line control is formalized and a parallel deep reinforcement learning approach is presented.…”
Section: The Present Issue
confidence: 99%
“…Finally, the functions of condition monitoring and fault diagnosis have been further expanded in recent years. They are no longer limited to assisting in the operation and maintenance of equipment, but extend to the optimization of complex workflows, and even participate directly in the operation of machines, providing real-time and effective guidance for their control [9,10], thus greatly improving the intelligence and efficiency of machine operations. The premise, however, is that the results of condition monitoring and fault diagnosis must be correct and reliable.…”
Section: Future
confidence: 99%