2018
DOI: 10.1051/cocv/2017019

Bellman equation and viscosity solutions for mean-field stochastic control problem

Abstract: We consider the stochastic optimal control problem of a McKean-Vlasov stochastic differential equation whose coefficients may depend on the joint law of the state and control. By using feedback controls, we reformulate the problem as a deterministic control problem with only the marginal distribution of the process as the controlled state variable, and prove that the dynamic programming principle holds in its general form. Then, relying on the notion of differentiability with respect to probability measures r…
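To make the reformulation concrete, here is a minimal sketch in the spirit of the abstract; the notation (b, \sigma, f, g, the feedback control a, and the lifted costs \hat f, \hat g) is illustrative and not taken verbatim from the paper. The controlled McKean-Vlasov dynamics, whose coefficients may depend on the joint law of state and control, read

\[
dX_s \;=\; b\bigl(X_s,\mathbb{P}_{(X_s,\alpha_s)},\alpha_s\bigr)\,ds
\;+\; \sigma\bigl(X_s,\mathbb{P}_{(X_s,\alpha_s)},\alpha_s\bigr)\,dW_s,
\qquad \alpha_s = a(s,X_s).
\]

With feedback controls, the marginal law \mu_s = \mathbb{P}_{X_s} evolves deterministically (it solves a controlled Fokker-Planck equation), so the value function can be written on the space of probability measures (stated here as a maximization),

\[
v(t,\mu) \;=\; \sup_{a}\Bigl\{\int_t^T \hat f\bigl(\mu_s,a(s,\cdot)\bigr)\,ds \;+\; \hat g(\mu_T)\Bigr\},
\qquad \mu_t = \mu,
\]

with \hat f(\mu,a) := \int f\bigl(x,\mu\circ(\mathrm{id},a)^{-1},a(x)\bigr)\,\mu(dx) and \hat g(\mu) := \int g(x,\mu)\,\mu(dx), and the dynamic programming principle takes the flow form

\[
v(t,\mu) \;=\; \sup_{a}\Bigl\{\int_t^{\theta} \hat f\bigl(\mu_s,a(s,\cdot)\bigr)\,ds \;+\; v(\theta,\mu_{\theta})\Bigr\},
\qquad t \le \theta \le T.
\]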

Cited by 122 publications (145 citation statements). References 39 publications.
“…This leads to a characterization of the solution in terms of an adjoint backward stochastic differential equation (BSDE) coupled with a forward SDE, and we refer to [19] for a theory of BSDEs of McKean-Vlasov type. Alternatively, the dynamic programming approach for the control of McKean-Vlasov dynamics has been considered in [6], [7], [32] for specific McKean-Vlasov dynamics and under a density assumption on the probability law of the state process, and then analyzed in a general framework in [36] (without noise W^0), where the problem is reformulated into a deterministic control problem involving the marginal distribution process.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
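For orientation, here is a minimal sketch of the adjoint-BSDE (Pontryagin) approach mentioned in the excerpt above, written under the simplifying assumption that the coefficients depend only on the law of the state; the Hamiltonian H and the Lions derivative \partial_\mu below are generic notation, not quoted from the cited works. The forward McKean-Vlasov SDE is coupled with a backward adjoint equation of McKean-Vlasov type:

\begin{align*}
dX_t &= b(X_t,\mathbb{P}_{X_t},\alpha_t)\,dt + \sigma(X_t,\mathbb{P}_{X_t},\alpha_t)\,dW_t,\\
dY_t &= -\Bigl(\partial_x H(X_t,\mathbb{P}_{X_t},\alpha_t,Y_t,Z_t)
 + \widetilde{\mathbb{E}}\bigl[\partial_\mu H(\widetilde X_t,\mathbb{P}_{X_t},\widetilde\alpha_t,\widetilde Y_t,\widetilde Z_t)(X_t)\bigr]\Bigr)\,dt + Z_t\,dW_t,\\
Y_T &= \partial_x g(X_T,\mathbb{P}_{X_T})
 + \widetilde{\mathbb{E}}\bigl[\partial_\mu g(\widetilde X_T,\mathbb{P}_{X_T})(X_T)\bigr],
\end{align*}

where H(x,\mu,a,y,z) = b(x,\mu,a)\cdot y + \sigma(x,\mu,a)\cdot z + f(x,\mu,a) and (\widetilde X,\widetilde\alpha,\widetilde Y,\widetilde Z) denotes an independent copy of (X,\alpha,Y,Z) under \widetilde{\mathbb{E}}.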
“…We end this section by mentioning that it would be possible to allow the coefficients in (2.1) and (2.3) to depend more generally on the joint law of the state and control. This is actually considered in the continuous-time version [14] of the McKean-Vlasov control problem.…”
Section: McKean-Vlasov Control Problem
Citation type: mentioning
Confidence: 99%
“…The case of continuous-time McKean-Vlasov equations requires more technicalities and mathematical tools, and will be addressed in [14]. The discrete-time framework has also been considered in [9] for the LQ problem, and arises naturally in situations where signal values are available only at certain times.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
“…The link between these two approaches is discussed in [21]. Solutions of Hamilton-Jacobi PDEs in the space of probabilities in the framework of the extrinsic approach were studied in [13], [16], [20], [21], [28]. The intrinsic sub- and superdifferentials were also used for this class of equations (see [14], [15], [26]).…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
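For reference, the terminology in the last excerpt can be read as follows (a hedged sketch with generic symbols). The extrinsic approach differentiates a function of measures v through its lift to a space of random variables,

\[
U(t,\xi) := v\bigl(t,\mathbb{P}_{\xi}\bigr), \qquad \xi \in L^2(\Omega;\mathbb{R}^d),
\]

declaring v differentiable at \mu = \mathbb{P}_\xi when U(t,\cdot) is Fréchet differentiable at \xi, with DU(t,\xi) = \partial_\mu v(t,\mu)(\xi); the intrinsic approach instead works with sub- and superdifferentials defined directly on the Wasserstein space of probability measures.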