2010
DOI: 10.1002/nav.20384

Strategic capacity decision‐making in a stochastic manufacturing environment using real‐time approximate dynamic programming

Abstract: In this study, we illustrate a real-time approximate dynamic programming (RTADP) method for solving multistage capacity decision problems in a stochastic manufacturing environment, using an exemplary three-stage manufacturing system with recycle. The system is a moderate-size queuing network that experiences stochastic variations in demand and product yield. The dynamic capacity decision problem is formulated as a Markov decision process (MDP). The proposed RTADP method starts with a set of heuri…
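Because the abstract is truncated here, the sketch below is only a generic illustration of the real-time ADP idea it describes: keep an approximate value table for visited states, choose each capacity action with a one-step lookahead against that table, and update the table online as the stochastic system evolves. All names (CapacityMDP, the toy transition and cost model, the learning rate alpha) are assumptions for illustration, not the authors' formulation.

import random
from collections import defaultdict

class CapacityMDP:
    """Toy stand-in for a stochastic multistage manufacturing system."""
    def __init__(self, actions, gamma=0.95):
        self.actions = actions          # candidate capacity decisions
        self.gamma = gamma              # discount factor

    def sample_next_state(self, state, action):
        # Placeholder stochastic transition; demand and yield noise would enter here.
        nxt = tuple(max(0, s + action + random.choice([-1, 0, 1])) for s in state)
        cost = sum(nxt) + abs(action)   # placeholder holding/expansion cost
        return nxt, cost

def rtadp(mdp, start_state, steps=1000, alpha=0.1):
    """Learn an approximate value table online while acting in the system."""
    V = defaultdict(float)              # value estimates; a heuristic seed could replace 0.0
    state = start_state
    for _ in range(steps):
        # One-step lookahead over candidate capacity decisions using sampled transitions.
        best_action, best_q = None, float("inf")
        for a in mdp.actions:
            nxt, cost = mdp.sample_next_state(state, a)
            q = cost + mdp.gamma * V[nxt]
            if q < best_q:
                best_action, best_q = a, q
        # Update only the visited state (the "real-time" flavor of ADP).
        V[state] += alpha * (best_q - V[state])
        state, _ = mdp.sample_next_state(state, best_action)
    return V

# Example: three buffers, capacity adjustments in {-1, 0, 1}.
values = rtadp(CapacityMDP(actions=[-1, 0, 1]), start_state=(5, 5, 5))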

Cited by 8 publications (4 citation statements)
References 20 publications
“…Atamtürk and Zhang , Mudchanatongsuk et al , Ordóñez and Zhao , and Karoonsoontawong are examples of addressing arc capacity expansion problem with robust optimization approach. Another technique dealing with stochastic capacity expansion is Markov Decision Processes as illustrated in Bean et al , Bhatnagar et al , and Pratikakis . However, to the best of our knowledge, there is no such work in stochastic network capacity expansion problems.…”
Section: Literature Review (citation type: mentioning; confidence: 99%)
“…Global parametric function approximation can be realized with ANNs [5] or Evolutionary Value Function Approximations [15], combining Reinforcement Learning with Genetic Algorithms. In contrast, local nonparametric function approximation is represented by instance-based algorithms in terms of classical value tables with k-NN prediction [13,16] and kernel regression [5].…”
Section: Related Work (citation type: mentioning; confidence: 99%)
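The contrast drawn in this passage, global parametric approximators such as ANNs versus local nonparametric, instance-based ones, can be made concrete with a minimal k-NN value averager over visited states. The class below is an illustrative sketch, not the approximators used in the cited works.

import numpy as np

class KNNValueApproximator:
    """Local averager: predict a state's value as the mean of its k nearest stored samples."""
    def __init__(self, k=5):
        self.k = k
        self.states = None              # visited states, one row per state
        self.values = None              # value estimates at those states

    def fit(self, states, values):
        self.states = np.asarray(states, dtype=float)
        self.values = np.asarray(values, dtype=float)
        return self

    def predict(self, state):
        dists = np.linalg.norm(self.states - np.asarray(state, dtype=float), axis=1)
        nearest = np.argsort(dists)[: self.k]
        return float(self.values[nearest].mean())

A global parametric alternative would instead fit a single set of weights (for example, a small neural network) shared across the whole state space, whereas the local averager only interpolates among stored samples.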
“…ADP has been successfully applied to process control problems [13,18,16] using time-independent value functions that are determined by Approximate Value Iteration. According to [13], local averagers such as k-NN predictors should be preferred over global approximators in the context of Approximate Value Iteration due to convergence issues.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
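The Approximate Value Iteration scheme the passage refers to can be sketched as fitted value iteration over a fixed set of sampled states, with a local k-NN averager supplying successor values. The transition function, cost function, and sampling scheme below are assumptions made only to keep the example self-contained.

import numpy as np

def knn_value(query, states, values, k=5):
    # Local averager: mean value of the k sample states nearest to the query state.
    dists = np.linalg.norm(states - np.asarray(query, dtype=float), axis=1)
    return values[np.argsort(dists)[:k]].mean()

def approximate_value_iteration(sample_states, actions, step, cost,
                                gamma=0.95, sweeps=50, k=5):
    """Fitted value iteration over a fixed set of sampled states."""
    states = np.asarray(sample_states, dtype=float)
    values = np.zeros(len(states))
    for _ in range(sweeps):
        new_values = np.empty_like(values)
        for i, s in enumerate(states):
            # Bellman backup: best expected cost-to-go over candidate actions,
            # with successor values read off the k-NN averager.
            new_values[i] = min(
                cost(s, a) + gamma * knn_value(step(s, a), states, values, k)
                for a in actions
            )
        values = new_values
    return states, values

With a time-independent value function of this kind, the greedy action at any state encountered online is recovered by the same one-step minimization used inside each sweep.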
“…(Pratikakis et al 2006) developed a real-time adaptive dynamic programming method for planning and scheduling. (Pratikakis et al 2010) studied Approximate Dynamic Programming (ADP) method to address the curse-of-dimensionality associated with multistage capacity decision problems. More recently, (Lin et al 2014) developed a Stochastic Dynamic Programming (SDP) approach for multi-site capacity planning in thin-film transistor liquid crystal display (TFT-LCD) manufacturing considering demand uncertainty.…”
Section: Introduction (citation type: mentioning; confidence: 99%)