2013
DOI: 10.1111/2041-210x.12082

Complex decisions made simple: a primer on stochastic dynamic programming

Abstract: Summary 1. Under increasing environmental and financial constraints, ecologists are faced with making decisions about dynamic and uncertain biological systems. To do so, stochastic dynamic programming (SDP) is the most relevant tool for determining an optimal sequence of decisions over time. 2. Despite an increasing number of applications in ecology, SDP still suffers from a lack of widespread understanding. The required mathematical and programming knowledge as well as the absence of introductory material provi…

Cited by 122 publications (136 citation statements). References 66 publications.
“…In a similar fashion, one may solve infinite horizon problems using the method of value iteration, which is analogous to backwards induction applied repeatedly from a zero terminal rewards function ϕ ( x ) = 0 for all x , until some convergence criterion for f ( x , t ) is reached (see Marescot et al., for an overview). We compare results obtained using these numerical methods with the proposed matrix methods.…”
Section: Methods (mentioning)
confidence: 99%
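
As a sketch of the value-iteration idea quoted above, the following Python fragment repeats a backward-induction step from zero terminal rewards until the values converge. The arrays P (one transition matrix per action), R (per-state, per-action rewards), the discount factor and tolerance are illustrative assumptions, not quantities from the cited paper.

import numpy as np

# Value iteration for an infinite-horizon SDP: a minimal sketch, assuming a
# discrete state space, a reward matrix R[s, a], and one transition matrix
# P[a] per action. All names and parameter values are illustrative.
def value_iterate(P, R, discount=0.95, tol=1e-8):
    n_states, n_actions = R.shape
    V = np.zeros(n_states)  # phi(x) = 0: start from zero terminal rewards
    while True:
        # Expected value of taking action a in state s, then acting optimally
        Q = R + discount * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:  # convergence criterion on the values
            return V_new, Q.argmax(axis=1)   # optimal values and policy
        V = V_new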
“…SDP can be applied where sequential management decisions are made for stochastic systems with a finite number of states (Bellman, Lubow, Mangel and Clark, Marescot et al.). After discretizing the system into states, the first step of an SDP is to define a management objective.…”
Section: Methods (mentioning)
confidence: 99%
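
To make the discretize-then-optimize recipe concrete, here is a hypothetical finite-horizon backward-induction sketch in Python, where the terminal-reward vector encodes the management objective. All names (P, R, terminal, T) are assumptions for illustration, under the same conventions as the sketch above.

import numpy as np

# Finite-horizon backward induction on discretized states: a sketch assuming
# R[s, a] rewards and one transition matrix P[a] per action.
def backward_induction(P, R, terminal, T):
    n_states, n_actions = R.shape
    V = terminal.copy()                       # management objective at the horizon
    policy = np.zeros((T, n_states), dtype=int)
    for t in reversed(range(T)):              # step backwards from the horizon
        Q = R + np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        policy[t] = Q.argmax(axis=1)          # best decision in each state at time t
        V = Q.max(axis=1)
    return V, policy                          # time- and state-dependent policy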
“…We showed that SDM can be used to identify appropriate target harvest rates in the absence of information needed to determine optimal state-dependent decisions (e.g., Martin et al. 2009, Marescot et al. 2013) or to employ robust state-dependent harvest control rules (e.g., Punt 2006, Deroba and Bence 2008, Hilborn 2012).…”
Section: Identifying Reference Points for Assessment-Limited Populations (mentioning)
confidence: 99%
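
For illustration only, a state-dependent harvest control rule can be as simple as a threshold (escapement) rule; the sketch below is a generic example under assumed parameter values, not a rule taken from the cited studies.

# Threshold harvest control rule: harvest a fixed fraction of the biomass
# above an escapement level, and nothing below it. Values are illustrative.
def threshold_harvest(biomass, escapement=1000.0, rate=0.2):
    return max(0.0, rate * (biomass - escapement))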
“…AHM employs dynamic decision analyses to identify optimal harvest strategies recurrently over time, and reduces uncertainty through targeted monitoring programs that provide rigorous estimates of important state variables (e.g., abundance) and the responses of those variables to management (Nichols et al. 1995, Johnson et al. 1997). Proponents of AHM emphasize use of dynamic optimization methods (e.g., Lubow 1996, Marescot et al. 2013) to identify optimal state-dependent policies at regular intervals over time as a function of population abundance and environmental conditions (Johnson et al. 1997, Martin et al. 2009). Although theoretically optimal, implementation of these methods presupposes that a formal monitoring and assessment program is in place to estimate abundance or provide reliable, unbiased indices of abundance at regular intervals so that optimal policies can be updated over time.…”
Section: Introduction (mentioning)
confidence: 99%
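
The AHM cycle described in this passage can be summarized as a loop; in the hypothetical Python fragment below, estimate_abundance, implement and reoptimize are placeholder functions standing in for the monitoring, management and optimization steps, not functions from any cited work.

# Conceptual adaptive harvest management loop (all functions are hypothetical
# placeholders): monitor the state, apply the current state-dependent policy,
# then update the policy with the new information.
def ahm_cycle(policy, years, estimate_abundance, implement, reoptimize):
    for year in years:
        state = estimate_abundance(year)  # monitoring supplies the state estimate
        implement(policy[state])          # apply the state-dependent decision
        policy = reoptimize(state)        # re-optimize at regular intervals
    return policy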