We consider the problem of minimizing a strongly convex function that depends on an uncertain parameter θ. Because the objective is uncertain, the optimum, x*(θ), is also a function of θ. We propose an efficient method to compute x*(θ) and its statistics. We expand x*(θ) in a truncated chaos basis and study first-order methods that compute the optimal coefficients. We establish the convergence rate of the method as the number of basis functions, and hence the dimensionality of the optimization problem, is increased. We give the first non-asymptotic rates for the gradient descent and accelerated gradient descent methods. Our analysis exploits convexity and does not rely on a diminishing step-size strategy; as a result, it is much faster than the state of the art, both in theory and in our preliminary numerical experiments. A surprising side effect of our analysis is that the proposed method also acts as a variance reduction technique for the problem of estimating x*(θ).
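As a rough illustration of this approach (not the authors' implementation), the sketch below expands x*(θ) in a truncated Hermite polynomial basis and runs plain gradient descent on the expansion coefficients for a toy strongly convex objective f(x, θ) = ½(x − θ)². The basis choice, the Gaussian distribution of θ, the fixed Monte Carlo sample, and the step-size values are all assumptions made for the example.

```python
import numpy as np

# Toy strongly convex objective f(x, theta) = 0.5 * (x - theta)**2,
# whose exact minimiser is x*(theta) = theta. (Illustrative choice.)
def grad_f(x, theta):
    return x - theta

# Truncated basis: probabilists' Hermite polynomials He_0..He_{K-1},
# orthogonal under theta ~ N(0, 1) (an assumption of this sketch).
def hermite_basis(theta, K):
    phi = np.ones((theta.size, K))
    if K > 1:
        phi[:, 1] = theta
    for k in range(2, K):
        phi[:, k] = theta * phi[:, k - 1] - (k - 1) * phi[:, k - 2]
    return phi

K, lr, iters = 4, 0.05, 3000
rng = np.random.default_rng(0)
thetas = rng.standard_normal(500)   # fixed Monte Carlo sample of theta
Phi = hermite_basis(thetas, K)      # (500, K) design matrix
c = np.zeros(K)                     # chaos-expansion coefficients

for _ in range(iters):
    x = Phi @ c                     # surrogate x*(theta) at the samples
    # Chain rule on the empirical objective: d/dc E[f] = E[f'(x) * phi].
    c -= lr * Phi.T @ grad_f(x, thetas) / len(thetas)

print(np.round(c, 3))
```

For this toy problem the exact map is x*(θ) = θ, so the recovered coefficients should be close to (0, 1, 0, 0); accelerated gradient descent would replace the plain update with a momentum step.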
We consider the problem of minimising functions represented as a difference of lattice submodular functions. We propose analogues of the SupSub, SubSup and ModMod routines for lattice submodular functions. We show that our majorisation-minimisation algorithms produce iterates whose objective values decrease monotonically and converge to a local minimum. We also extend additive hardness results, and show that a broad range of functions can be expressed as the difference of submodular functions.
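For flavour, the following is a minimal sketch of a ModMod-style majorisation-minimisation loop in the simpler set-function setting (the paper itself works with lattice submodular functions, which this sketch does not cover): each iteration replaces f by a modular upper bound and g by a modular lower bound, both tight at the current set, and minimises the resulting modular surrogate exactly. The bounds follow the standard Iyer-Bilmes constructions; the example functions f and g are assumptions.

```python
import numpy as np

# ModMod-style sketch for minimising f(S) - g(S) with f, g submodular,
# in the set-function case. Sets are boolean arrays over a ground set of size n.

def marginal(h, S, j):
    """h(j | S): the gain of adding element j to the set S."""
    T = S.copy(); T[j] = True
    return h(T) - h(S)

def upper_bound_weights(f, S, n):
    """Weights u with m(T) = const + sum_{j in T} u[j], a modular upper bound
    of submodular f that is tight at S (the Iyer-Bilmes 'grow' bound):
    u[j] = f(j | S \\ {j}) for j in S, and u[j] = f(j | {}) otherwise."""
    u = np.zeros(n)
    empty = np.zeros(n, dtype=bool)
    for j in range(n):
        if S[j]:
            Sm = S.copy(); Sm[j] = False
            u[j] = f(S) - f(Sm)
        else:
            u[j] = marginal(f, empty, j)
    return u

def lower_bound_weights(g, S, n):
    """Weights w with h(T) = const + sum_{j in T} w[j], a modular lower bound
    of submodular g that is tight at S, built from a greedy chain that lists
    the elements of S first (a Lovasz-extension subgradient)."""
    order = [j for j in range(n) if S[j]] + [j for j in range(n) if not S[j]]
    w = np.zeros(n)
    T = np.zeros(n, dtype=bool)
    for j in order:
        w[j] = marginal(g, T, j)
        T[j] = True
    return w

def modmod(f, g, n, iters=50):
    S = np.zeros(n, dtype=bool)
    for _ in range(iters):
        diff = upper_bound_weights(f, S, n) - lower_bound_weights(g, S, n)
        S_new = diff < 0        # exact minimiser of the modular surrogate
        if (S_new == S).all():
            break               # local minimum reached
        S = S_new
    return S

# Toy difference of submodular functions (illustrative choice):
a = np.array([1.0, 2.0, 3.0, 0.5, 1.5])
b = np.array([2.5, 1.0, 4.0, 0.5, 1.0])
f = lambda S: 2.0 * np.sqrt(a[S].sum())   # concave of modular => submodular
g = lambda S: b[S].sum()                  # modular (hence also submodular)
print(np.flatnonzero(modmod(f, g, n=5)))  # selected elements, here {0, 2}
```

Because the surrogate upper-bounds f − g and is tight at the current iterate, each step can only decrease the objective, which is the monotonicity property claimed above.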
We consider an extension of Sequential Probability Ratio Tests to the setting in which the costs involved are uncertain but can be learned adaptively. In doing so, we demonstrate the effects that modelling this uncertainty has on the observation cost and on the costs associated with Type I and Type II errors. We derive the value of information relating to the modelled uncertainties and examine the case of statistical dependence between the parameter affecting the decision outcome and the parameter affecting the unknown cost. Numerical examples of the derived theory are provided, along with a simulation comparing this adaptive learning framework to the classical one.
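As a toy illustration of the idea (not the paper's derivation), the sketch below runs a Bernoulli SPRT while maintaining a Bayesian posterior over an uncertain per-observation cost and re-deriving Wald's thresholds from the current cost estimate at every step. The Gamma-exponential cost model and the heuristic link between costs and target error rates (α ≈ c/w0, β ≈ c/w1) are assumptions made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

p0, p1 = 0.4, 0.6          # H0 / H1 Bernoulli success rates
w0, w1 = 100.0, 100.0      # costs of Type I / Type II errors
true_c = 0.5               # true (unknown) mean cost per observation

# Gamma(shape, rate) prior on the rate of exponential cost realisations;
# the posterior mean of the cost itself is then rate / (shape - 1).
shape, rate = 2.0, 1.0

llr, n = 0.0, 0
true_state = 1             # simulate observations from H1
while True:
    c_hat = rate / (shape - 1)            # current posterior mean cost
    # Heuristic cost-to-error-rate link (an assumption of this sketch,
    # not the paper's result): alpha ~ c/w0, beta ~ c/w1.
    alpha = min(0.5, c_hat / w0)
    beta = min(0.5, c_hat / w1)
    upper = np.log((1 - beta) / alpha)    # Wald's approximate thresholds
    lower = np.log(beta / (1 - alpha))

    if llr >= upper:
        print(f"accept H1 after {n} observations"); break
    if llr <= lower:
        print(f"accept H0 after {n} observations"); break

    p = p1 if true_state else p0
    x = rng.random() < p                  # next Bernoulli observation
    llr += np.log(p1 / p0) if x else np.log((1 - p1) / (1 - p0))
    n += 1

    cost_obs = rng.exponential(true_c)    # noisy signal about the cost
    shape += 1.0                          # conjugate Gamma update
    rate += cost_obs
```

In the classical framework the thresholds are fixed in advance; here they shift as the posterior over the cost concentrates, which is the adaptive-learning behaviour the simulation in the paper compares against.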