2012
DOI: 10.1007/s13676-012-0015-8

Approximate dynamic programming in transportation and logistics: a unified framework

Citation Types: 0 supporting, 43 mentioning, 0 contrasting

Year Published: 2014–2024

Cited by 72 publications (43 citation statements)
References 49 publications
“…More information on different types of policies, and on ADP modeling in general, can be found in Powell (2012), Powell and Ryzhov (2013), Powell (2014) and, with examples from transportation and logistics, Powell et al (2012). These works also discuss the relationship between (approximate) dynamic programming and other techniques such as stochastic programming, simulation, and stochastic search.…”
Section: Policies
Citation type: mentioning
confidence: 99%
“…24,25 To handle the existing uncertainties in a mathematical way, the post-decision state variable is introduced to represent the state of the system after a decision has been made but before any exogenous information has arrived. Exogenous information, referring to the sources of uncertainty, can be viewed as information that becomes available over time in practical circumstances.…”
Section: Proposed Approach
Citation type: mentioning
confidence: 99%
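
As a rough illustration of the pre-/post-decision split described in this excerpt, here is a minimal Python sketch using a hypothetical single-product inventory problem; all names (inventory, order, demand) are illustrative assumptions, not taken from the cited papers.

import random

# Hypothetical inventory example: the decision is an order quantity,
# the exogenous information is the demand that arrives afterwards.

def post_decision_state(inventory: int, order: int) -> int:
    # State after the decision is made, before demand is observed (S_t^x).
    return inventory + order

def next_pre_decision_state(post_state: int, demand: int) -> int:
    # State after the exogenous demand has arrived (S_{t+1}).
    return max(post_state - demand, 0)

inventory = 5
order = 3                                         # decision x_t
s_post = post_decision_state(inventory, order)    # post-decision state S_t^x
demand = random.randint(0, 10)                    # exogenous information W_{t+1}
s_next = next_pre_decision_state(s_post, demand)  # next pre-decision state S_{t+1}
print(s_post, demand, s_next)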
“…According to [Powell et al, 2012]: "Stochastic optimization problems arise in many settings, and as a result, a wide range of algorithmic strategies have evolved from communities with names such as Markov decision processes, stochastic programming, stochastic search, simulation optimization, reinforcement learning, approximate dynamic programming and optimal control." These authors also classify policies into four classes, wherein each policy approximates the solution for the current period by adapting Bellman's equation (e.g.…”
Section: Expected Cost Minimization Approaches
Citation type: mentioning
confidence: 99%
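
For context, the Bellman recursion that these policy classes approximate can be written, in the notation Powell commonly uses (a standard formulation supplied here for reference, not quoted from the citing paper), as

V_t(S_t) = \max_{x_t \in \mathcal{X}_t} \Big( C(S_t, x_t) + \gamma \, \mathbb{E}\big[ V_{t+1}(S_{t+1}) \mid S_t, x_t \big] \Big),

and, around the post-decision state S_t^x mentioned in the earlier excerpt,

V_t^x(S_t^x) = \mathbb{E}\big[ V_{t+1}(S_{t+1}) \mid S_t^x \big].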
“…Of course, hybrid methods might be set up, as mentioned in [Powell et al, 2012]. Within the methodology presented in section 4.2, several classes of policies are used: myopic policies for bounds, policy function approximation for some specific scenarios, and lookahead policies that incorporate deterministic information to approximate the value of a decision over the following periods.…”
Citation type: mentioning
confidence: 99%
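
To make the policy classes named in this excerpt concrete, the following Python sketch contrasts a myopic policy (immediate cost only) with a one-step lookahead policy that adds a value function approximation; the cost, transition, and value functions are hypothetical placeholders, not the methodology of the citing paper.

def myopic_policy(state, actions, cost):
    # Myopic: choose the action with the lowest immediate cost.
    return min(actions, key=lambda x: cost(state, x))

def lookahead_policy(state, actions, cost, transition, value_approx, gamma=0.9):
    # Lookahead: immediate cost plus the approximated value of the
    # resulting post-decision state.
    return min(actions, key=lambda x: cost(state, x) + gamma * value_approx(transition(state, x)))

# Toy usage: state is inventory on hand, actions are order quantities.
cost = lambda s, x: 2 * x + 5 * max(8 - (s + x), 0)    # ordering + shortage cost
transition = lambda s, x: s + x                        # post-decision inventory
value_approx = lambda s_post: 3 * max(10 - s_post, 0)  # rough penalty for a low buffer

print(myopic_policy(3, range(11), cost))                               # -> 5
print(lookahead_policy(3, range(11), cost, transition, value_approx))  # -> 7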