Abstract. Given a compact parameter set Y ⊂ R^p, we consider polynomial optimization problems (P_y) on R^n whose description depends on the parameter y ∈ Y. We assume that one can compute all moments of some probability measure ϕ on Y, absolutely continuous with respect to the Lebesgue measure (e.g., Y is a box or a simplex and ϕ is uniformly distributed). We then provide a hierarchy of semidefinite relaxations whose associated sequence of optimal solutions converges to the moment vector of a probability measure that encodes all information about all global optimal solutions x*(y) of P_y, as y ranges over Y. In particular, one may approximate as closely as desired any polynomial functional of the optimal solutions, e.g., their ϕ-mean. In addition, using this knowledge of moments, the measurable function y ↦ x*_k(y) giving the k-th coordinate of optimal solutions can be estimated, e.g., by maximum entropy methods. Also, for a boolean variable x_k, one may approximate as closely as desired its persistency ϕ({y : x*_k(y) = 1}), i.e., the probability that in an optimal solution x*(y), the coordinate x*_k(y) takes the value 1. Last but not least, from an optimal solution of the dual semidefinite relaxations, one obtains a sequence of polynomial (resp. piecewise polynomial) lower approximations with L_1(ϕ) (resp. almost uniform) convergence to the optimal value function.
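As a concrete instance of the moment assumption (a standard computation, not taken verbatim from the paper): when Y = [0,1]^p and ϕ is the uniform distribution, every moment is available in closed form,

```latex
\gamma_\alpha \;=\; \int_Y y^\alpha \, d\varphi(y)
\;=\; \int_{[0,1]^p} \prod_{i=1}^{p} y_i^{\alpha_i}\, dy
\;=\; \prod_{i=1}^{p} \frac{1}{\alpha_i + 1},
\qquad \alpha \in \mathbb{N}^p,
```

and similar closed-form expressions hold for the uniform distribution on a simplex, which is why boxes and simplices are singled out as parameter sets.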
Introduction

Roughly speaking, given a parameter set Y and an optimization problem whose description depends on y ∈ Y (call it P_y), parametric optimization is concerned with the behavior and properties of the optimal value as well as primal (and possibly dual) optimal solutions of P_y as y varies in Y. This is quite a challenging problem, and in general one may only obtain information locally around some nominal value y_0 of the parameter. There is a vast and rich literature on the topic; for a detailed treatment, the interested reader is referred to, e.g., Bonnans and Shapiro [4] and the many references therein. Sometimes, in the context of optimization with data uncertainty, some probability distribution ϕ on the parameter set Y is available, and in this context one is also interested in, e.g., the distribution of the optimal value and optimal solutions, all viewed as random variables. In particular, for discrete optimization problems where cost coefficients are random variables with joint distribution ϕ, some bounds on the expected optimal value have been obtained. More recently, Natarajan et al. [17] extended the earlier work in [3] to even

1991 Mathematics Subject Classification. 65D15, 65K05, 46N10, 90C22.
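To fix ideas, consider a toy parametric problem (an illustrative example of ours, not one from the paper): for each parameter y,

```latex
\mathbf{P}_y:\qquad J(y) \;=\; \min_{x \in \mathbb{R}} \;\bigl\{\, x^2 - 2\,y\,x \,\bigr\},
```

whose unique optimal solution is x*(y) = y with optimal value J(y) = -y^2. If Y = [0,1] and ϕ is the uniform distribution on Y, then the ϕ-mean of the optimal solution is ∫_0^1 x*(y) dϕ(y) = 1/2, and the expected optimal value is ∫_0^1 J(y) dϕ(y) = -1/3. In this simple case the maps y ↦ x*(y) and y ↦ J(y) are available in closed form; the point of the approach described in the abstract is to approximate such functionals of optimal solutions when no closed form exists.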