We consider a stochastic control problem where the set of controls is not necessarily convex and the system is governed by a nonlinear backward stochastic differential equation. We establish necessary as well as sufficient conditions of optimality for two models. The first concerns strict (classical) controls. The second is an extension of the first to relaxed controls, which are measure-valued processes.

The system is governed by a backward stochastic differential equation of the type

y_t^v = ξ + ∫_t^T b(s, y_s^v, z_s^v, v_s) ds − ∫_t^T z_s^v dW_s,

where b is a given function, ξ is the terminal data and W = (W_t)_{t≥0} is a standard d-dimensional Brownian motion, defined on a filtered probability space (Ω, F, (F_t)_{t≥0}, P) satisfying the usual conditions. The control variable v = (v_t), called a strict (classical) control, is an F_t-adapted process with values in some set U of R^k. We denote by U the class of all strict controls. The criterion to be minimized over the set U has the form

J(v) = E[ g(y_0^v) + ∫_0^T h(t, y_t^v, z_t^v, v_t) dt ].

Lemma 13 (Ekeland's variational principle). Let (E, d) be a complete metric space and f : E → R be lower semicontinuous and bounded from below. Given ε > 0, suppose u_ε ∈ E satisfies f(u_ε) ≤ inf f + ε. Then for any λ > 0, there exists v ∈ E such that f(v) ≤ f(u_ε), d(u_ε, v) ≤ λ, and for every w ≠ v, f(w) + (ε/λ) d(v, w) > f(v).
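On a finite search space, the point v guaranteed by Lemma 13 can be produced constructively by iterating the argmin of the penalized functional w ↦ f(w) + (ε/λ) d(v_k, w) until it stabilizes. The sketch below is purely illustrative: the function f, the grid standing in for E, the choice of u_ε, and the parameters ε and λ are all hypothetical, not taken from the text.

```python
import numpy as np

# Finite metric space E = grid points with d(x, y) = |x - y|;
# on a finite space every function is lower semicontinuous.
grid = np.linspace(-2.0, 2.0, 401)
f = (grid**2 - 1.0)**2 + 0.1 * grid

eps, lam = 0.75, 1.0

# An eps-minimizer u_eps: f(u_eps) <= inf f + eps (take u_eps = 0.5).
i_u = int(np.argmin(np.abs(grid - 0.5)))
assert f[i_u] <= f.min() + eps

# Constructive iteration: move to the argmin of the penalized
# functional w -> f(w) + (eps/lam) d(v_k, w); each move strictly
# decreases f, so the loop terminates on a finite grid.
i_v = i_u
while True:
    penalized = f + (eps / lam) * np.abs(grid - grid[i_v])
    i_next = int(np.argmin(penalized))
    if i_next == i_v:
        break
    i_v = i_next

v, u_eps = grid[i_v], grid[i_u]
# The three conclusions of Ekeland's variational principle:
assert f[i_v] <= f[i_u]                                  # f(v) <= f(u_eps)
assert abs(v - u_eps) <= lam + 1e-12                     # d(u_eps, v) <= lam
assert np.all(f + (eps/lam) * np.abs(grid - v) >= f[i_v] - 1e-12)
```

The final point v is a strict minimizer of the penalized functional centered at itself, which is exactly the "approximate first-order condition" exploited when the principle is applied to near-optimal controls.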
We consider in this paper mixed relaxed-singular stochastic control problems, where the control variable has two components, the first measure-valued and the second singular. The control domain is not necessarily convex and the system is governed by a nonlinear stochastic differential equation, in which the measure-valued part of the control enters both the drift and the diffusion coefficients. We establish necessary optimality conditions, of the Pontryagin maximum principle type, satisfied by an optimal relaxed-singular control, which exists under general conditions on the coefficients. The proof is based on the strict singular stochastic maximum principle established by Bahlali-Mezerdi, Ekeland's variational principle, and some stability properties of the trajectories and adjoint processes with respect to the control variable.

The system is governed by a stochastic differential equation (SDE for short) of the type

dx_t = ∫_{A_1} b(t, x_t, a) q_t(da) dt + ∫_{A_1} σ(t, x_t, a) q_t(da) dW_t + G_t dη_t,

where b, σ and G are given deterministic functions, x_0 is the initial data and W = (W_t)_{t≥0} is a d-dimensional standard Brownian motion, defined on a filtered probability space (Ω, F, (F_t)_{t≥0}, P) satisfying the usual conditions. The process q is the measure-valued component of the control, and η is an increasing process (componentwise), continuous on the left with limits on the right, with η_0 = 0. The pair (q, η) is called a mixed relaxed-singular control (relaxed control for short) and we denote by R the class of relaxed controls. The functional cost, to be minimized over R, has the form

J(q, η) = E[ g(x_T) + ∫_0^T ∫_{A_1} h(t, x_t, a) q_t(da) dt + ∫_0^T k_t dη_t ],

where g, h and k are given functions. Singular control problems have been studied by many authors, including Beneš-Shepp-Witsenhausen [5], Chow-Menaldi-Robin [8], Karatzas-Shreve [18], Davis-Norman [9] and Haussmann-Suo [14, 15, 16]; see [15] for a complete list of references on the subject. The approaches used in these papers are mainly based on dynamic programming. It was shown in particular that the value function is a solution of a variational inequality, and the optimal state is a diffusion reflected at the free boundary.
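To make the mixed dynamics concrete, the controlled SDE above can be discretized by an Euler scheme in which the relaxed control is a finitely supported measure (weights over a finite action set) and the singular part η is a nondecreasing process. Everything below, the coefficients b, σ, G, the action set, and the choice η_t = t, is a hypothetical toy example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

T, n = 1.0, 1000
dt = T / n

actions = np.array([-1.0, 1.0])     # finite action set A_1 (hypothetical)
weights = np.array([0.5, 0.5])      # relaxed control q_t = (δ_{-1} + δ_{+1})/2

b = lambda x, a: a                  # drift coefficient (toy choice)
sigma = lambda x, a: 0.0 * a        # diffusion coefficient (toy: zero)
G = 1.0                             # coefficient of the singular part
x0 = 0.3

x, eta = x0, 0.0
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    drift = weights @ b(x, actions)       # ∫_{A_1} b(t, x_t, a) q_t(da)
    diff = weights @ sigma(x, actions)    # ∫_{A_1} σ(t, x_t, a) q_t(da)
    deta = dt                             # singular control: η_t = t, increasing
    x = x + drift * dt + diff * dW + G * deta
    eta += deta

# The ±1 drifts average out under q_t, the diffusion is zero, and the
# singular term contributes G·η_T = T, so x_T = x0 + T.
```

The point of the example is only the shape of the scheme: the drift and diffusion are integrated against the measure q_t at each step, while the singular component enters additively through G dη_t.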
Note that in [14], the authors apply the compactification method to show existence of an optimal relaxed-singular control. The other major approach to control problems is to derive necessary conditions satisfied by an optimal control, known as the stochastic maximum principle. The first version of the stochastic maximum principle covering singular control problems was obtained for linear dynamics, a convex cost criterion and convex state constraints. Necessary optimality conditions for nonlinear SDEs were obtained by Bahlali-Chala [1] and Bahlali-Mezerdi [2]. The common fact in these works is that an optimal strict singular control does not necessarily exist: the set U of strict singular controls (v, η), where v : [0, T] × Ω → A_1 ⊂ R^k, is too narrow and is not equipped with a good topological structure. The idea is then to introduce the class R of relaxed controls, in which the controller chooses at time t a probability measure q_t(da) on the set A_1, rather than an element v_t of A_1. The relaxed control problem finds its interest in two essential points. The first is that it is ...
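The passage from strict to relaxed controls can be illustrated by the classical chattering example: strict controls oscillating ever faster between −1 and +1 converge, in the stable topology on occupation measures, to the constant relaxed control q_t = (δ_{−1} + δ_{+1})/2, a mixture no strict control attains. The numerical check below is purely illustrative; the test function f and the discretization are hypothetical.

```python
import numpy as np

# Oscillating strict controls v^n_t in {-1, +1}, switching on intervals of
# length 1/n. Their occupation measures dt δ_{v^n_t}(da) converge stably
# to dt q_t(da) with the mixture q_t = (δ_{-1} + δ_{+1})/2.
def strict_integral(f, n, steps=200000):
    t = (np.arange(steps) + 0.5) / steps               # midpoints of [0, 1]
    v = np.where(np.floor(n * t) % 2 == 0, 1.0, -1.0)  # v^n_t
    return float(np.mean(f(t, v)))                     # ≈ ∫_0^1 f(t, v^n_t) dt

f = lambda t, a: t * a          # bounded, continuous test function
relaxed_value = 0.0             # ∫_0^1 ∫ f(t, a) q_t(da) dt = ∫_0^1 t·0 dt = 0

errs = [abs(strict_integral(f, n) - relaxed_value) for n in (2, 10, 100)]
# The gap shrinks (roughly like 1/(2n)) as the oscillation speeds up,
# while every fixed v^n only takes the extreme values ±1: the mixture
# itself is reached only in the relaxed limit.
assert errs[0] > errs[1] > errs[2]
```

This is exactly why the relaxed class R has better topological structure: it contains the limit points that the strict class U is missing.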
In this paper, we introduce and study optimality conditions for stochastic control problems of nonlinear backward doubly stochastic differential equations. Necessary and sufficient optimality conditions, where the control domain is convex and the coefficients depend explicitly on the control variable, are proved. The results are stated in the form of a weak stochastic maximum principle and, under additional hypotheses, in the global form. This is the first version of the stochastic maximum principle that covers backward doubly stochastic systems.
We consider a stochastic control problem where the set of strict (classical) controls is not necessarily convex and the system is governed by a nonlinear backward stochastic differential equation. By introducing a new approach, we establish necessary as well as sufficient conditions of optimality for two models. The first concerns relaxed controls, which are measure-valued processes. The second is a particular case of the first and relates to strict control problems.

The criterion to be minimized over the set U has the form

J(v) = E[ g(y_0^v) + ∫_0^T h(t, y_t^v, z_t^v, v_t) dt ],

where g and h are given functions and (y_t^v, z_t^v) is the trajectory of the system controlled by v. A control u ∈ U is called optimal if it satisfies

J(u) = inf_{v ∈ U} J(v).

Stochastic control problems for backward and forward-backward systems have been studied by many authors, including Peng [27], Xu [31], El-Karoui et al. [12, 13], Wu [30], Dokuchaev and Zhou [9], Peng and Wu [28], Bahlali and Labed [2], Bahlali [5, 6], Shi and Wu [29], and Ji and Zhou [19]. The dynamic programming approach was studied by Fuhrman and Tessitore [16].

Since the strict control domain is nonconvex, the classical method of spike variation on strict controls runs into a major difficulty: the generator b and the running cost coefficient h depend on the two variables y_t and z_t. We cannot derive the variational inequality directly, because z_t is hard to handle; there is no convenient pointwise (in t) estimation for it, as opposed to the first variable y_t. To overcome this difficulty, we introduce a new approach which consists in using a bigger class R of processes, replacing the U-valued process (v_t) by a P(U)-valued process (q_t), where P(U) is the space of probability measures on U equipped with the topology of stable convergence. This new class of processes is called relaxed controls and has a richer structure of compactness and convexity.
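The lack of a pointwise handle on z_t also shows up when BSDEs are treated numerically: z is typically recovered only through conditional expectations. A minimal least-squares Monte Carlo sketch for an uncontrolled toy BSDE (driver b ≡ 0, terminal data ξ = W_T, linear regression basis; all choices hypothetical and for illustration only) is:

```python
import numpy as np

rng = np.random.default_rng(1)

T, n, paths = 1.0, 50, 20000
dt = T / n

# Simulated Brownian increments and paths
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
W = np.concatenate([np.zeros((paths, 1)), np.cumsum(dW, axis=1)], axis=1)

# Toy BSDE y_t = ξ + ∫_t^T b ds − ∫_t^T z_s dW_s with b ≡ 0 and ξ = W_T,
# whose exact solution is y_t = W_t, z_t = 1.
y = W[:, -1].copy()                       # terminal condition y_T = ξ
for k in range(n - 1, -1, -1):
    # z at t_k is only accessible through a conditional expectation:
    # z_{t_k} ≈ E[y_{t_{k+1}} ΔW_k] / dt (averaged over paths here)
    z0 = float(np.mean(y * dW[:, k]) / dt)
    # y_{t_k} = E[y_{t_{k+1}} | F_{t_k}], approximated by least-squares
    # regression on the basis {1, W_{t_k}} (exact for this linear example)
    A = np.stack([np.ones(paths), W[:, k]], axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    y = A @ coef

y0 = float(np.mean(y))    # ≈ E[ξ] = 0; z0 now holds the t = 0 estimate ≈ 1
```

The regression step is the numerical counterpart of the analytical difficulty mentioned above: y propagates backwards through a conditional expectation, while z is only ever obtained as a projection, never pointwise in t.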
This convexity property of relaxed controls enables us to treat the problem by convex perturbation on relaxed controls. In the relaxed model, the system is governed by the BSDE

y_t^q = ξ + ∫_t^T ∫_U b(s, y_s^q, z_s^q, a) q_s(da) ds − ∫_t^T z_s^q dW_s.

The functional cost to be minimized over the class R of relaxed controls is defined by

J(q) = E[ g(y_0^q) + ∫_0^T ∫_U h(t, y_t^q, z_t^q, a) q_t(da) dt ].

A relaxed control µ is called optimal if it solves

J(µ) = inf_{q ∈ R} J(q).

The relaxed control problem is a generalization of the strict control problem. Indeed, if q_t(da) = δ_{v_t}(da) is a Dirac measure concentrated at v_t, then the relaxed problem reduces to the strict one.

Proof. Let u ∈ U and µ ∈ R be, respectively, a strict control and a relaxed control. By (39), we have J(µ) ≤ J(q) for all q ∈ R. Since δ(U) ⊂ R, then J(µ) ≤ J(δ_v) for every v ∈ U. Hence J(µ) ≤ inf_{v∈U} J(δ_v). The control u being an element of U, we then get J(µ) ≤ J(δ_u). On the other hand, by (40), there exists a sequence of strict controls whose associated measures converge to dt µ_t(da) stably, P-a.s. By (42), we then obtain the corresponding inequality for the costs. Using (35) and letting n go to infinity in this inequality yields the reverse bound. Finally, by (41) and (43), we conclude, and the lemma is proved.

To establish necessary optimality conditions for strict controls, we need the following lemma.

Lemma 16. The strict control u minimizes J over U if and only if the relaxed control µ = δ_u minimizes J over R.

Proof. Suppose that u minimizes the cost J over U; then J(u) = inf_{v∈U} J(v). ...
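The content of Lemma 16 and the preceding proof can be summarized compactly (notation as above; this is only a restatement of the argument, not an addition to it):

```latex
% Embedding of strict controls and equality of the two value functions.
% For every strict control v in U, the Dirac measure \delta_v lies in R and
%   J(\delta_v) = J(v),
% hence  \inf_{q \in R} J(q) \le \inf_{v \in U} J(v).
% Conversely, by the stability and approximation properties ((35), (40)),
% every relaxed control is a stable limit of strict controls with
% converging costs, so the two infima coincide:
\[
  \inf_{q \in \mathcal{R}} J(q) \;=\; \inf_{v \in \mathcal{U}} J(v),
\]
% and u is optimal in U if and only if \mu = \delta_u is optimal in R.
```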