2017
DOI: 10.1007/s00780-017-0327-5

On time-inconsistent stochastic control in continuous time

Abstract: In this paper, which is a continuation of the discrete-time paper (Björk and Murgoci in Finance Stoch. 18:545–592, 2014), we study a class of continuous-time stochastic control problems which, in various ways, are time-inconsistent in the sense that they do not admit a Bellman optimality principle. We study these problems within a game-theoretic framework, and we look for Nash subgame perfect equilibrium points. For a general controlled continuous-time Markov process and a fairly general objective functional, …
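To see concretely why such problems lack a Bellman principle, here is a minimal worked example (our own illustration in assumed notation, not an excerpt from the paper): the mean-variance criterion is a nonlinear function of the conditional expectation, which breaks the tower-property structure that dynamic programming relies on.

```latex
% Minimal sketch (our notation): for a controlled process X^u, the
% mean-variance objective
\[
  J(t,x,u) \;=\; \mathbb{E}_{t,x}\!\bigl[X_T^u\bigr]
  \;-\; \frac{\gamma}{2}\,\mathrm{Var}_{t,x}\!\bigl[X_T^u\bigr]
  \;=\; \mathbb{E}_{t,x}\!\bigl[X_T^u\bigr]
  \;-\; \frac{\gamma}{2}\Bigl(\mathbb{E}_{t,x}\!\bigl[(X_T^u)^2\bigr]
        - \mathbb{E}_{t,x}\!\bigl[X_T^u\bigr]^{2}\Bigr)
\]
% contains the squared conditional expectation E_{t,x}[X_T^u]^2, which
% does not iterate via the tower property.  The criteria at different
% times t are therefore mutually inconsistent, and a control that is
% optimal as seen from time t is generally abandoned at later times;
% this is the failure of the Bellman optimality principle that the
% game-theoretic (subgame perfect equilibrium) formulation addresses.
```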

Cited by 295 publications (390 citation statements)
References 25 publications
“…This means that $P^{t,\mu}_T$ converges to $\mu$ in $\mathcal{P}_2(\mathbb{R}^d)$ as $t \nearrow T$, uniformly in $\alpha \in \mathcal{A}$. Now, from the definition of $v$ in (3.8), we have (5.8) from the growth condition on $f$ in (H2'). By the continuity assumption on $g$, together with the growth condition on $g$ in (H2'), which allows us to use the dominated convergence theorem, we deduce from (5.7) that $\hat{g}(P^{t,\mu}_T)$ converges to $\hat{g}(\mu)$ as $t \nearrow T$, uniformly in $\alpha \in \mathcal{A}$.…”
Section: Of Test Functions for the Lifted Bellman Equation as the Se… (citation type: mentioning)
confidence: 99%
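For readers decoding the notation in the excerpt above, a brief background sketch (our addition; the symbols are assumed from the citing paper):

```latex
% P_2(R^d): probability measures on R^d with finite second moment,
% metrized by the 2-Wasserstein distance
\[
  W_2(\mu,\nu) \;=\;
  \Bigl( \inf_{\pi \in \Pi(\mu,\nu)}
         \int_{\mathbb{R}^d \times \mathbb{R}^d} |x-y|^2 \,
         \pi(\mathrm{d}x,\mathrm{d}y) \Bigr)^{1/2},
\]
% where Pi(mu, nu) is the set of couplings of mu and nu.  The statement
% "P_T^{t,mu} converges to mu in P_2(R^d) as t increases to T, uniformly
% in alpha" then reads: the supremum over controls alpha in A of
% W_2(P_T^{t,mu}, mu) tends to 0 as t tends to T.
```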
“…It is claimed in [8] and [38] that the Bellman optimality principle does not hold and that the problem is therefore time-inconsistent. This is correct when one takes into account only the state process $X$ (that is, its realization), since it is not Markovian; but as shown in this section, the dynamic programming principle does hold whenever we consider the marginal distribution as the state variable.…”
Section: Dynamic Programming Principle (citation type: mentioning)
confidence: 99%
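The lifting argument in this excerpt can be stated compactly (our paraphrase, with assumed notation):

```latex
% Sketch of the lifted dynamic programming principle (our notation):
% take the marginal law of the controlled state as the state variable
% and let P^{t,mu,alpha}_s denote that law at time s, started from mu.
\[
  v(t,\mu) \;=\; \inf_{\alpha \in \mathcal{A}}
  \Bigl\{ \int_t^T \hat f\bigl(P^{t,\mu,\alpha}_s\bigr)\,\mathrm{d}s
          \;+\; \hat g\bigl(P^{t,\mu,\alpha}_T\bigr) \Bigr\}.
\]
% The flow s -> P^{t,mu,alpha}_s is Markovian on the space of measures
% even when X itself is not, so Bellman's principle holds in lifted form:
\[
  v(t,\mu) \;=\; \inf_{\alpha \in \mathcal{A}}
  \Bigl\{ \int_t^{\theta} \hat f\bigl(P^{t,\mu,\alpha}_s\bigr)\,\mathrm{d}s
          \;+\; v\bigl(\theta, P^{t,\mu,\alpha}_{\theta}\bigr) \Bigr\},
  \qquad t \le \theta \le T.
\]
```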
“…The last definition corresponds to the closed-loop or state-feedback type of control. This can be characterized by solving the HJB equation obtained by discretizing the corresponding optimal control problem, which is closely related to the multi-person differential game [2], [3], [8], [9], [14]–[18], [24], [35], [36]. In addition, the mixed equilibrium solution concept was used in [37], and the Markovian framework for time-inconsistent linear-quadratic problems was developed in [38].…”
Section: Introduction (citation type: mentioning)
confidence: 99%
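The multi-player, backward view mentioned in this excerpt can be made concrete with a small numerical sketch (entirely our own illustration, not code from any cited work): each discrete time step is treated as a separate player, time inconsistency comes from quasi-hyperbolic (beta-delta) discounting, and the subgame perfect equilibrium policy is computed by backward induction, with each player best-responding to the already-fixed policies of later players.

```python
import numpy as np

# Subgame perfect equilibrium via backward induction for a discretized
# time-inconsistent consumption problem (illustrative sketch only).
# The time-t player weights a payoff k >= 1 periods ahead by
# beta * delta**k, so players at different dates disagree about the
# future and no single Bellman equation applies.

T = 20                 # number of periods ("players")
W = 100                # wealth grid: 0, 1, ..., W units
beta, delta = 0.6, 0.97
u = np.sqrt            # concave one-period utility of consumption

# V[w] = continuation utility of wealth w from the next period onward,
# discounted exponentially (delta only), under the equilibrium
# policies of all later players.
V = np.zeros(W + 1)
policy = np.zeros((T, W + 1), dtype=int)

for t in reversed(range(T)):
    V_next = V.copy()
    for w in range(W + 1):
        c = np.arange(w + 1)                       # feasible consumption
        # Player t applies the extra beta-discount to everything after
        # its own period and best-responds to the later players.
        vals = u(c) + beta * delta * V_next[w - c]
        c_star = int(np.argmax(vals))
        policy[t, w] = c_star
        # Continuation value handed back to player t-1 uses the
        # equilibrium action but *exponential* weights (no beta).
        V[w] = u(c_star) + delta * V_next[w - c_star]

print("equilibrium consumption at t=0 from full wealth:", policy[0, W])
```

Note the asymmetry that encodes the game: the maximization uses beta * delta while the value passed backward uses delta alone; with beta = 1 the loop collapses to ordinary dynamic programming.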
“…This type of system of equations appears in time-inconsistent stochastic control problems and characterizes their subgame perfect Nash equilibria. Time-inconsistent control problems have recently been studied by Ekeland and Lazrak (2010) [6], Yong (2012) [20], Björk, Murgoci and Zhou (2014) [2], Hu, Jin and Zhou (2012, 2017) [8, 9], Djehiche and Huang (2016) [5], Björk, Khapko and Murgoci (2017) [1], Wei, Yong and Yu (2017) [18], and Ni, Zhang and Krstic (2018) [13], among others. Time-inconsistency occurs, for example, when a non-exponential discount rate is used or when the cost functional is a nonlinear function of the (conditional) expectation of a state process, as in dynamic mean-variance control problems.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
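To pin down the non-exponential-discounting source of inconsistency mentioned at the end of this excerpt (our illustration, in assumed notation):

```latex
% Sketch (our notation): with a discount function phi, e.g. the
% hyperbolic phi(s) = 1/(1 + k s), the time-t criterion
\[
  J(t,x,u) \;=\; \mathbb{E}_{t,x}\!\left[
    \int_t^T \varphi(s-t)\, C\bigl(X_s^u, u_s\bigr)\,\mathrm{d}s
    \;+\; \varphi(T-t)\, F\bigl(X_T^u\bigr) \right]
\]
% depends on the evaluation date t through phi(. - t).  Only the
% exponential phi(s) = e^{-rho s} satisfies phi(s+t) = phi(s) phi(t),
% which is what keeps the criteria at different dates consistent.
% For any other phi the controllers at different dates are distinct
% players, and their subgame perfect equilibria are characterized by
% a system of equations indexed by the evaluation time rather than
% by a single Bellman (HJB) equation.
```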