2013
DOI: 10.1007/s10164-013-0362-4
Dynamic decision-making in uncertain environments I. The principle of dynamic utility

Abstract: Understanding the dynamics or sequences of animal behavior usually involves the application of either dynamic programming or stochastic control methodologies. A difficulty of dynamic programming lies in interpreting numerical output, whereas even relatively simple models of stochastic control are notoriously difficult to solve. Here we develop the theory of dynamic decision-making under probabilistic conditions and risks, assuming individual growth rates of body size are expressed as a simple stochastic process…

Cited by 12 publications (16 citation statements)
References 29 publications
“…An important aspect of our proposed solution is that expected utility theory is a static model (Yoshimura et al 2012). This implies that game theory and the well-known concept of the evolutionarily stable strategy (ESS) are only valid in the context of decisions arrived at singly.…”
Section: Discussion
confidence: 99%
“…Because the traditional expected utility theory is static, its previous applications to animal and human behavior lack the optimality criterion (Friedman and Savage 1952; Caraco 1980; Real and Caraco 1986; Yoshimura and Shields 1987). We here develop the optimality criterion for behavioral decisions in both animals and humans, and derive the utility (fitness) surface, u(g;w) (see also Yoshimura et al 2012).…”
Section: Discussion
confidence: 99%
“…Because dynamic utility theory optimizes Markov chains (stochastic processes) as a form of sequential decision making, it maximizes the geometric mean of multiplicative growth rates [20]. The DUF is derived as follows [21, 22]. Let time t = 0, …, T (final time), and let w_t and r_t represent wealth and the growth rate, respectively, at time t.…”
Section: Introduction
confidence: 99%
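The excerpt above notes that sequential multiplicative growth is governed by the geometric, not arithmetic, mean of the growth rates. A minimal sketch of that identity, with illustrative wealth dynamics w_{t+1} = r_t · w_t (the variable names follow the excerpt; the specific rate values are made up):

```python
import math

def geometric_mean(rates):
    """Geometric mean of multiplicative growth rates r_0, ..., r_{T-1}."""
    return math.exp(sum(math.log(r) for r in rates) / len(rates))

# Wealth evolves as w_{t+1} = r_t * w_t, so the final wealth is
#   w_T = w_0 * (r_0 * r_1 * ... * r_{T-1}) = w_0 * G**T,
# where G is the geometric mean of the rates.
w0 = 1.0
rates = [1.2, 0.9, 1.1, 0.8, 1.3]  # hypothetical per-step growth rates
G = geometric_mean(rates)
wT = w0 * math.prod(rates)
assert abs(wT - w0 * G ** len(rates)) < 1e-12
```

Because final wealth is a product of rates, any strategy that maximizes long-run wealth must maximize G, which is the point the excerpt attributes to dynamic utility theory.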
“…Much previous work has indicated that “risk‐averse” behaviors are common in nature and theory (Stephens et al, ; Stephens & Krebs, ; Yoshimura, Ito, Miller III, & Tainaka, ; Zhang, Brennan, & Lo, ). In our previous model of risk‐sensitive foraging, we found a preference for a “risk‐prone” strategy when the foraging time is longer than the optimal foraging time that maximizes the arithmetic mean fitness (x_A^*) (Ito, Uehara, Morita, Tainaka, & Yoshimura, ).…”
Section: Introduction
confidence: 99%
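The contrast between risk-averse behavior and arithmetic-mean maximization in the excerpt above rests on a standard fact: a risky option can have a higher arithmetic mean growth rate than a safe one yet a lower geometric mean, so a geometric-mean maximizer avoids it. A small sketch with made-up option payoffs (the options and their rates are assumptions for illustration, not taken from the paper):

```python
import math

# Hypothetical equiprobable growth-rate outcomes for two options:
safe = [1.05, 1.05]   # always grow by 5%
risky = [1.6, 0.6]    # boom or bust with equal probability

def arith(rates):
    return sum(rates) / len(rates)

def geo(rates):
    return math.exp(sum(math.log(r) for r in rates) / len(rates))

# The risky option wins on arithmetic mean: 1.1 > 1.05 ...
assert arith(risky) > arith(safe)
# ... but loses on geometric mean: sqrt(1.6 * 0.6) ~= 0.98 < 1.05,
# so a geometric-mean (long-run growth) maximizer is "risk-averse" here.
assert geo(risky) < geo(safe)
```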