Stochastic gradient methods are scalable for solving large-scale optimization problems that involve empirical expectations of loss functions. Existing results mainly apply to optimization problems where the objectives are one- or two-level expectations. In this paper, we consider the multi-level compositional optimization problem, which involves compositions of multi-level component functions and nested expectations over a random path. It finds applications in risk-averse optimization and sequential planning. We propose a class of multi-level stochastic gradient methods motivated by the method of multi-timescale stochastic approximation. First, we propose a basic T-level stochastic compositional gradient algorithm, establish its almost sure convergence, and obtain an n-iteration error bound O(n^{-1/2^T}). Then we develop accelerated multi-level stochastic gradient methods by using an extrapolation-interpolation scheme to take advantage of the smoothness of individual component functions. When all component functions are smooth, we show that the convergence rate improves to O(n^{-4/(7+T)}) for general objectives and O(n^{-4/(3+T)}) for strongly convex objectives. We also provide almost sure convergence and rate-of-convergence results for nonconvex problems. The proposed methods and theoretical results are validated using numerical experiments.
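The abstract itself contains no code, but the multi-timescale idea it describes can be sketched concretely. Below is a minimal, hedged illustration for T = 3 nested levels: auxiliary variables track the inner function values with faster stepsizes while the decision variable moves on the slowest timescale. The component functions f1, f2, f3, the Gaussian query noise, and the stepsize exponents are illustrative assumptions, not the authors' reference implementation; the paper derives the exact timescale separation needed for the stated rates.

```python
# A minimal sketch (not the paper's reference code) of the basic
# multi-timescale idea for T = 3 nested levels:
#     minimize  F(x) = f1(f2(f3(x))),
# where every level is observed only through noisy samples.
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Smooth, bounded-gradient component functions (assumed for the demo).
def f3(x): return np.sin(x)                    # innermost level
def f2(u): return u + 0.5 * np.sin(u)          # middle level
def f1(v): return 0.5 * np.sum(v ** 2)         # outer scalar objective

def grad_f3(x): return np.cos(x)               # all Jacobians here are
def grad_f2(u): return 1.0 + 0.5 * np.cos(u)   # diagonal, so elementwise
def grad_f1(v): return v                       # products give the chain rule

def noisy(v):  # stochastic oracle: value/gradient plus Gaussian noise
    return v + rng.normal(scale=0.1, size=np.shape(v))

x = rng.normal(size=d)
y3 = f3(x)       # running estimate of f3(x)
y2 = f2(y3)      # running estimate of f2(f3(x))
print("initial objective:", f1(f2(f3(x))))

for k in range(1, 20001):
    alpha = 0.5 / k      # slowest timescale: the decision variable
    beta3 = k ** -0.6    # faster timescale for the inner tracker
    beta2 = k ** -0.8    # intermediate timescale (illustrative choice)

    # Track the nested function values with weighted running averages.
    y3 = (1 - beta3) * y3 + beta3 * noisy(f3(x))
    y2 = (1 - beta2) * y2 + beta2 * noisy(f2(y3))

    # Chain-rule gradient estimate assembled from noisy Jacobian queries.
    g = noisy(grad_f3(x)) * noisy(grad_f2(y3)) * noisy(grad_f1(y2))
    x = x - alpha * g

print("final objective:  ", f1(f2(f3(x))))
```

Running the sketch shows the objective decreasing toward its minimum even though no exact function values or gradients are ever computed, which is the point of tracking the nested expectations on separate timescales.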
While clustering has been well studied in the past decade, model selection has drawn much less attention due to the difficulty of the problem. In this paper, we address both problems jointly by recovering an ideal affinity tensor from an imperfect input. By taking into account the relationships among the affinities induced by the cluster structures, we are able to significantly improve the affinity input, for example by repairing entries corrupted by gross outliers. More importantly, the recovered ideal affinity tensor directly indicates the number of clusters and their membership, thus solving model selection and clustering jointly. To enforce the global consistency in the affinities demanded by the cluster structure, we impose a number of constraints; in particular, the tensor should be low-rank and sparse, and it should obey what we call the rank-1 sum constraint. To solve this highly non-smooth and non-convex problem, we exploit its mathematical structure and express the original problem in an equivalent form amenable to numerical optimization and convergence analysis. To scale to large problem sizes, we also propose an alternative formulation, so that large instances can be solved efficiently via stochastic optimization in an online fashion. We evaluate our algorithm on different applications to demonstrate its superiority and show that it adapts to a wide variety of settings.
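To make the central claim concrete, the toy below illustrates why an ideal affinity object encodes both the model selection answer and the membership, and why a single gross outlier breaks its low-rank structure. This is not the paper's algorithm: it uses the simpler second-order (pairwise, matrix) case with a hypothetical label vector, purely to show the structural facts the recovery method exploits.

```python
# A toy illustration (not the paper's algorithm): for a pairwise affinity
# with k clusters, the ideal matrix is block-diagonal, its rank equals k,
# and identical rows share a cluster.
import numpy as np

labels = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])          # hypothetical ground truth
A = (labels[:, None] == labels[None, :]).astype(float)  # ideal affinity matrix

print("number of clusters = rank(A) =", np.linalg.matrix_rank(A))

# Membership: identical rows of the ideal affinity belong to one cluster.
membership = np.unique(A, axis=0, return_inverse=True)[1].ravel()
print("recovered membership:", membership)

# One gross outlier entry inflates the rank, which is what a
# low-rank + sparse decomposition is meant to repair.
A_corrupt = A.copy()
A_corrupt[0, 7] = A_corrupt[7, 0] = 1.0
print("rank after corruption:", np.linalg.matrix_rank(A_corrupt))  # exceeds 3
```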