The control synthesis problem is one of the central problems in modern control theory. Its solution can be obtained in various classes of feedback controls. For example, in classical control theory under a geometric constraint, the desired control ranges in the set of extreme points of the constraint, so that the synthesized system is described by differential equations with a discontinuous right-hand side [1]. Sometimes control synthesis can be described with the use of "switching surfaces" dividing the phase space into domains with continuous controls, while discontinuities ("switchings") are allowed on these surfaces [2-4].

However, in many applied problems the solutions can have an impulsive character, for example, in aerospace systems with instantaneous motion corrections, in systems with communication constraints, and in logical-dynamical systems. Such solutions require controls of a generalized type, which consist of impulsive "delta functions" or their combinations with a bounded control. The first program solutions of the impulsive control problem were obtained in [5]. It was shown in [6] that, in a linear impulsive problem, the number of jumps of an optimal control does not exceed the dimension of the phase space.

Such problems were considered mainly from the viewpoint of program controls [4, 7, 8], while the construction of a well-formalized theory of impulsive control synthesis still remains an open problem. In the present paper, we show that dynamic programming methods can be applied to impulsive control problems in which solutions are sought in the form of synthesizing strategies. We consider linear systems, which permits one to combine the classical theory of distributions with the theory of generalized (viscosity) solutions [9-11] of the corresponding quasi-variational inequalities [12] of Hamilton-Jacobi-Bellman type. The suggested approach also permits one to study problems with higher-order derivatives of delta functions [13].
THE PROBLEM

Consider the minimization problem for a generalized Mayer-Bolza functional on the trajectories of an impulsive control system:

J(U(·)) = Var_{[t0,t1]} U(·) + ϕ(x(t1 + 0)) → inf,
dx(t) = A(t)x(t)dt + B(t)dU(t), t ∈ [t0, t1], x(t0 − 0) = x0. (1)

Here x(t) ∈ R^n is the phase vector, U(·) ∈ BV([t0, t1]; R^m) is a generalized control, and BV([t0, t1]; R^m) is the space of functions of bounded variation ranging in R^m. We assume that A(t) ∈ R^{n×n} and B(t) ∈ R^{n×m} are continuous matrix functions. The terminal time t1 is fixed. The terminal term ϕ : R^n → R ∪ {+∞} is a closed convex function, whose presence in the expression for J(U(·)) permits us to state the optimality principle in what follows.

The special choice¹ ϕ(x) = I(x | {x1}) of the function ϕ leads to the well-known problem of bringing the system from a point x0 at time t0 to a point x1 at time t1 with minimum variation of the control:

Var_{[t0,t1]} U(·) → inf,
dx(t) = A(t)x(t)dt + B(t)dU(t), t ∈ [t0, t1], x(t0 − 0) = x0, x(t1 + 0) = x1. (2)

¹ Here I(x | A) stands for the indicator function of a set A. (It vanishes on A and is equal to +∞ outside A.)
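As a minimal numerical illustration of problem (2), consider the scalar case n = m = 1 with constant coefficients A(t) ≡ a and B(t) ≡ 1 (the specific values below are hypothetical, chosen only for the sketch). By the variation-of-constants formula, a single jump of the control at any instant τ ∈ [t0, t1] suffices to steer x0 to x1, which agrees with the bound of [6] that the number of jumps need not exceed the dimension n = 1 of the phase space:

```python
import math

# Sketch: steer the scalar impulsive system dx = a*x dt + dU
# from x(t0 - 0) = x0 to x(t1 + 0) = x1 with a single jump at time tau.
# A jump of magnitude delta at tau contributes exp(a*(t1 - tau)) * delta
# to the terminal state, so delta is determined in closed form.

def single_jump_magnitude(a, t0, t1, x0, x1, tau):
    """Impulse magnitude steering x0 -> x1 with one jump at time tau."""
    free_motion = math.exp(a * (t1 - t0)) * x0  # terminal state with no control
    return (x1 - free_motion) / math.exp(a * (t1 - tau))

def terminal_state(a, t0, t1, x0, tau, delta):
    """x(t1 + 0) when the only control action is a jump delta at time tau."""
    return math.exp(a * (t1 - t0)) * x0 + math.exp(a * (t1 - tau)) * delta

# Hypothetical data for the sketch.
a, t0, t1, x0, x1 = 0.5, 0.0, 1.0, 2.0, -1.0
tau = 0.3  # the jump instant may be chosen arbitrarily in [t0, t1]
delta = single_jump_magnitude(a, t0, t1, x0, x1, tau)
print(terminal_state(a, t0, t1, x0, tau, delta))  # prints -1.0 (= x1)
```

Note that Var_{[t0,t1]} U(·) = |delta| here; for this scalar system any single-jump control reaching x1 at a later instant τ has a smaller contribution factor exp(a*(t1 − τ)) when a < 0, so the placement of the jump affects the cost even though reachability holds for every τ.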