2013
DOI: 10.1007/s10845-013-0852-9

A reinforcement learning based approach for a multiple-load carrier scheduling problem

Cited by 25 publications (8 citation statements)
References 33 publications

“…Firstly, RL is applicable to sequential decision problems, for instance, using Markov Decision Processes (MDP). Secondly, RL's applicability to related adaptive control problems in manufacturing systems has already been shown (Chen et al. 2015; Stricker et al. 2018; Usuga Cadavid et al. 2020). In particular, Deep Reinforcement Learning's (DRL) success in the gaming industry (Mnih et al. 2013; Silver et al. 2016) demonstrated the method's generalizability and solution quality when applied to complex, dynamic optimization problems.…”
Section: Earlier Work Leads To Reinforcement Learning Closing the Research Gap (citation type: mentioning)
confidence: 99%
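
This excerpt frames problems like carrier dispatching as sequential decision problems modeled as Markov Decision Processes. As a minimal, purely illustrative sketch (the states, actions, and rewards below are invented for this report and are not the formulation used in the cited paper), such an MDP can be written as a small simulation environment:

```python
import random

# Hypothetical toy MDP for dispatching a single carrier among N workstations.
# State: tuple of outstanding part requests (queue lengths) per station.
# Action: index of the station the carrier serves next.
# Reward: negative total number of outstanding requests after the move.

N_STATIONS = 3
MAX_QUEUE = 5
ARRIVAL_PROB = 0.3

def initial_state():
    return tuple(random.randint(0, MAX_QUEUE) for _ in range(N_STATIONS))

def step(state, action):
    """Serve station `action`, then let new requests arrive stochastically."""
    queues = list(state)
    queues[action] = 0                                 # carrier delivers parts
    queues = [min(MAX_QUEUE, q + (1 if random.random() < ARRIVAL_PROB else 0))
              for q in queues]
    return tuple(queues), -sum(queues)                 # (next state, reward)

# Roll out a random dispatching policy for a few steps.
state = initial_state()
for t in range(5):
    action = random.randrange(N_STATIONS)
    state, reward = step(state, action)
    print(t, action, state, reward)
```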
“…In Ref. [65], an RL-based method is proposed for dispatching material handling dolly trains in a general assembly line, wherein the dolly train delivers materials to workstations and carries multiple types of parts at a time. In Ref.…”
Section: Resource Allocation (citation type: mentioning)
confidence: 99%
“…[66] and product queue lengths in Ref. [65] drive the transitions of the system states, for which complete state-transition models are difficult to obtain. RL fits such sequential decision-making problems well and can solve them in a model-free way with various algorithms.…”
Section: Resource Allocation (citation type: mentioning)
confidence: 99%
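
The "model-free" property the citing authors point to can be illustrated with tabular Q-learning, which updates its value estimates from sampled transitions only and never builds the state-transition model. The sketch below is a hypothetical toy, not the formulation from the paper; for brevity the environment is reduced to a single queue with actions "wait" (0) or "serve" (1).

```python
import random
from collections import defaultdict

# Illustrative, self-contained tabular Q-learning sketch on a toy single-queue
# problem. Model-free: Q(s, a) is updated from observed (s, a, r, s') samples;
# the transition probabilities themselves are never estimated.

MAX_QUEUE = 5
ACTIONS = (0, 1)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = defaultdict(float)                        # (state, action) -> value estimate

def step(queue, action):
    """Serving empties the queue; a new request arrives with probability 0.5.
    The reward penalizes the resulting queue length."""
    queue = 0 if action == 1 else queue
    queue = min(MAX_QUEUE, queue + (1 if random.random() < 0.5 else 0))
    return queue, -queue

for episode in range(1000):
    state = random.randint(0, MAX_QUEUE)
    for t in range(50):
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # One-step Q-learning update using only the sampled transition.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
```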
“…These techniques are very useful in complex problems whose space is not completely available or predictable. Machine learning techniques fall into three main categories: supervised learning [1], unsupervised learning [2], and reinforcement learning [3]. In the case of supervised learning techniques, there is a set of training data for which the solutions are available, and the learner tries to infer a function from the training data that maps unseen data with high accuracy.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
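
The supervised setting summarized in this excerpt, inferring a function from labeled training data and applying it to unseen inputs, can be illustrated with a minimal sketch; the data below is synthetic and unrelated to the paper.

```python
import numpy as np

# Supervised learning in miniature: fit a function to inputs whose
# "solutions" (labels) are known, then apply it to unseen inputs.

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 10, size=50)                          # inputs
y_train = 2.0 * x_train + 1.0 + rng.normal(0, 0.5, size=50)    # known solutions

coeffs = np.polyfit(x_train, y_train, deg=1)    # infer a linear function
x_unseen = np.array([3.5, 7.2])
y_pred = np.polyval(coeffs, x_unseen)           # map unseen data
print(y_pred)
```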
“…Reinforcement signals are the only information the agent receives about its actions, and reinforcement techniques are about learning to act optimally in an environment using these signals. Reinforcement learning is applicable to problems in various fields of study such as prediction [4], scheduling [5], wireless networks [6,7], robotics [8], and ensemble learning [9], to mention a few. The variety of applications and the concept of learning from experience give rise to the study and design of reinforcement learning techniques for complex and large-scale problems.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
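
Both Introduction excerpts contrast reinforcement learning with supervised learning: the agent is never told the correct action, it only observes a scalar reinforcement signal after acting. A minimal agent-environment interaction loop (all class and method names here are hypothetical, chosen only for this sketch) makes that interface explicit:

```python
import random

# The environment returns only a scalar reward, never the "correct" action;
# this is what distinguishes the reinforcement setting from supervised learning.

class Environment:
    def reset(self):
        self.position = 0
        return self.position

    def step(self, action):
        # action in {-1, +1}; the reward is the agent's only feedback.
        self.position += action
        reward = 1.0 if self.position == 3 else 0.0
        return self.position, reward

class RandomAgent:
    def act(self, observation):
        return random.choice([-1, 1])

    def learn(self, observation, action, reward, next_observation):
        pass  # a learning agent would update its policy from the reward here

env, agent = Environment(), RandomAgent()
obs = env.reset()
for t in range(20):
    action = agent.act(obs)
    next_obs, reward = env.step(action)
    agent.learn(obs, action, reward, next_obs)
    obs = next_obs
```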