In this paper we consider optimization problems where the objective function is given in the form of an expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely, the stochastic approximation (SA) and the sample average approximation (SAA) methods. Both approaches, the SA and SAA methods, have a long history. Current opinion is that the SAA method can efficiently use a specific (say, linear) structure of the considered problem, while the SA approach is a crude subgradient method which often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive with, and even significantly outperform, the SAA method for a certain class of convex stochastic problems. We extend the analysis to the case of convex-concave stochastic saddle point problems and present (in our opinion highly encouraging) results of numerical experiments.
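To make the "properly modified SA" idea above concrete, here is a minimal sketch of robust stochastic approximation on a toy one-dimensional problem: a fixed stepsize of order 1/sqrt(N) combined with averaging of the iterates. The function names, the toy objective, and all constants are our own illustrative choices, not the paper's implementation.

```python
import random

def robust_sa(grad_sample, project, x0, step, n_iters, rng):
    """Robust SA sketch: fixed stepsize plus running average of the iterates."""
    x = x0
    avg = 0.0
    for t in range(1, n_iters + 1):
        g = grad_sample(x, rng)          # unbiased stochastic subgradient
        x = project(x - step * g)        # projected subgradient step
        avg += (x - avg) / t             # running average of the trajectory
    return avg

# Toy problem: minimize E[(x - xi)^2] with xi ~ Uniform(0, 1); minimizer is E[xi] = 0.5.
rng = random.Random(0)
grad = lambda x, r: 2.0 * (x - r.random())     # stochastic gradient sample
proj = lambda x: min(max(x, -2.0), 2.0)        # projection onto [-2, 2]
n = 20000
x_bar = robust_sa(grad, proj, x0=2.0, step=1.0 / n ** 0.5, n_iters=n, rng=rng)
print(x_bar)
```

The point of the averaging is that the individual iterates keep oscillating under the noise, while their average settles near the minimizer at the optimal rate for this problem class.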
A new recursive stochastic approximation algorithm with averaging of trajectories is investigated. Convergence with probability one is proved for a variety of classical optimization and identification problems. It is also demonstrated that, for these problems, the proposed algorithm achieves the highest possible rate of convergence.
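The trajectory-averaging scheme described above can be sketched as follows: run SA with slowly decaying steps c/t^gamma (gamma between 1/2 and 1) and return both the last iterate and the average of the trajectory; the average is the better estimate. The toy quadratic objective and all constants are our own assumptions for illustration.

```python
import random

def sa_with_averaging(grad_sample, x0, c, gamma, n_iters, rng):
    """SA with slowly decaying steps c / t**gamma and averaging of the
    trajectory (a minimal sketch of the averaging scheme)."""
    x = x0
    avg = 0.0
    for t in range(1, n_iters + 1):
        x = x - (c / t ** gamma) * grad_sample(x, rng)
        avg += (x - avg) / t   # running average of the trajectory
    return x, avg

# Toy quadratic: minimize E[(x - xi)^2], xi ~ N(0.5, 1); minimizer is 0.5.
rng = random.Random(1)
grad = lambda x, r: 2.0 * (x - r.gauss(0.5, 1.0))
last, averaged = sa_with_averaging(grad, x0=3.0, c=0.5, gamma=0.7, n_iters=50000, rng=rng)
print(last, averaged)
```

On this toy problem the last iterate fluctuates on the scale of the current stepsize, whereas the averaged iterate concentrates at the statistically optimal 1/sqrt(N) scale.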
A nonlinear black box structure for a dynamical system is a model structure that is prepared to describe virtually any nonlinear dynamics. There has been considerable recent interest in this area with structures based on neural networks, radial basis networks, wavelet networks, hinging hyperplanes, as well as wavelet transform based methods and models based on fuzzy sets and fuzzy rules. This paper describes all these approaches in a common framework, from a user's perspective. It focuses on the common features of the different approaches, the choices that have to be made, and the considerations relevant for a successful system identification application of these techniques. It is pointed out that the nonlinear structures can be seen as a concatenation of a mapping from observed data to a regression vector and a nonlinear mapping from the regressor space to the output space. These mappings are discussed separately. The latter mapping is usually formed as a basis function expansion. The basis functions are typically formed from one simple scalar function which is modified in terms of scale and location. The expansion from the scalar argument to the regressor space is achieved by a radial or a ridge type approach. Basic techniques for estimating the parameters in the structures are criterion minimization, as well as two step procedures, where first the relevant basis functions are determined using data, and then a linear least squares step determines the coordinates of the function approximation. A particular problem is to deal with the large number of potentially necessary parameters. This is handled by making the number of "used" parameters considerably less than the number of "offered" parameters, by regularization, shrinking, pruning or regressor selection. A more mathematically comprehensive treatment is given in a companion paper (Juditsky et al., 1995).
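The two-step procedure described above (first fix the basis functions, then a linear least squares step for the expansion coordinates) can be sketched on a scalar example with a Gaussian radial basis. The centers, scale, target function, and helper names are our own illustrative assumptions, not from the paper.

```python
import math

def radial_basis(x, centers, scale):
    """Gaussian radial basis functions evaluated at scalar x."""
    return [math.exp(-((x - c) / scale) ** 2) for c in centers]

def lstsq(A, y):
    """Solve the normal equations A^T A w = A^T y by Gaussian elimination."""
    n = len(A[0])
    M = [[sum(A[i][j] * A[i][k] for i in range(len(A))) for k in range(n)]
         for j in range(n)]
    b = [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(n)]
    for col in range(n):                          # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n):
                M[r][k] -= f * M[col][k]
            b[r] -= f * b[col]
    w = [0.0] * n                                 # back substitution
    for r in reversed(range(n)):
        w[r] = (b[r] - sum(M[r][k] * w[k] for k in range(r + 1, n))) / M[r][r]
    return w

# Step 1: fix the basis functions (centers and scale chosen from the data range).
centers = [0.0, 0.25, 0.5, 0.75, 1.0]
xs = [i / 50.0 for i in range(51)]
ys = [math.sin(2 * math.pi * x) for x in xs]
# Step 2: linear least squares for the coordinates of the expansion.
A = [radial_basis(x, centers, 0.3) for x in xs]
w = lstsq(A, ys)
fit = lambda x: sum(wi * bi for wi, bi in zip(w, radial_basis(x, centers, 0.3)))
err = max(abs(fit(x) - y) for x, y in zip(xs, ys))
print(err)
```

Because the basis functions are fixed in step 1, step 2 is an ordinary linear estimation problem; this is what makes the two-step procedure cheap compared with full criterion minimization over all parameters.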
In this paper we consider iterative methods for stochastic variational inequalities (s.v.i.) with monotone operators. Our basic assumption is that the operator possesses both smooth and nonsmooth components. Further, only noisy observations of the problem data are available. We develop a novel Stochastic Mirror-Prox (SMP) algorithm for solving s.v.i. and show that with a suitable stepsize strategy it attains the optimal rates of convergence with respect to the problem parameters. We apply the SMP algorithm to stochastic composite minimization and describe particular applications to a stochastic semidefinite feasibility problem and deterministic eigenvalue minimization.

1. Introduction. Variational inequalities with monotone operators form a convenient framework for unified treatment (including algorithmic design) of problems with "convex structure", such as convex minimization, convex-concave saddle point problems, and convex Nash equilibrium problems. In this paper we utilize this framework to develop first order algorithms for stochastic versions of the outlined problems, where the precise first order information is replaced with its unbiased stochastic estimates. This situation arises naturally in convex Stochastic Programming, where precise first order information is unavailable (see examples in Section 4). In some situations, e.g. those considered in [4, Section 3.3] and in Section 4.4, passing from available but relatively computationally expensive precise first order information to its cheap stochastic estimates allows one to accelerate the solution process, with the gain from randomization growing with problem size. Our "unifying framework" is as follows. Let Z be a convex compact set in Euclidean space E with inner product ⟨·, ·⟩, and let ‖·‖ be a norm on E (not
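To illustrate the flavor of the mirror-prox iteration on a stochastic variational inequality, here is a minimal Euclidean (projection-based) sketch on a toy bilinear saddle point problem: each iteration takes an extrapolation step, then updates from the operator evaluated at the extrapolated point, and the extrapolation points are averaged. The setup is purely illustrative (Euclidean prox, toy operator, hand-picked constants) and is not the paper's general SMP scheme.

```python
import random

def stochastic_mirror_prox(F_sample, project, z0, step, n_iters, rng):
    """Euclidean stochastic mirror-prox sketch: extrapolate, re-evaluate the
    noisy operator at the extrapolated point, update, and average."""
    z = list(z0)
    avg = [0.0] * len(z0)
    for t in range(1, n_iters + 1):
        g = F_sample(z, rng)
        w = project([zi - step * gi for zi, gi in zip(z, g)])    # extrapolation
        gw = F_sample(w, rng)
        z = project([zi - step * gi for zi, gi in zip(z, gw)])   # update
        for i in range(len(avg)):
            avg[i] += (w[i] - avg[i]) / t                        # average the w's
    return avg

# Toy bilinear saddle point min_x max_y x*y on [-1, 1]^2: the monotone operator
# is F(x, y) = (y, -x), with saddle point (0, 0); observations are noisy.
rng = random.Random(2)
F = lambda z, r: [z[1] + r.gauss(0, 0.1), -z[0] + r.gauss(0, 0.1)]
proj = lambda z: [min(max(v, -1.0), 1.0) for v in z]
n = 20000
sol = stochastic_mirror_prox(F, proj, z0=[1.0, 1.0], step=1.0 / n ** 0.5, n_iters=n, rng=rng)
print(sol)
```

The extrapolation step is what distinguishes mirror-prox from plain SA: on bilinear problems like this one, a plain stochastic gradient iteration cycles around the saddle point, while the extragradient-style update with averaging converges to it.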