We propose a simple projection and rescaling algorithm to solve the feasibility problem: find x ∈ L ∩ Ω, where L and Ω are respectively a linear subspace and the interior of a symmetric cone in a finite-dimensional vector space V. This projection and rescaling algorithm is inspired by previous work on rescaled versions of the perceptron algorithm and by Chubanov's projection-based method for linear feasibility problems. As in these predecessors, each main iteration of our algorithm contains two steps: a basic procedure and a rescaling step. When L ∩ Ω ≠ ∅, the projection and rescaling algorithm finds a point in L ∩ Ω in a number of main iterations that depends on how well-centered L ∩ Ω is; the best case is attained when L ∩ Ω contains the center of the symmetric cone Ω. We describe several possible implementations for the basic procedure, including a perceptron scheme and a smooth perceptron scheme. The perceptron scheme requires O(r^4) perceptron updates and the smooth perceptron scheme requires O(r^2) smooth perceptron updates, where r stands for the Jordan-algebra rank of V.

Introduction

We propose a simple algorithm based on projection and rescaling operations to solve the feasibility problem

find x ∈ L ∩ Ω, (1)

where L and Ω are respectively a linear subspace and the interior of a symmetric cone in a finite-dimensional vector space V. Problem (1) is fundamental in optimization, as it encompasses a large class of feasibility problems. For example, for A ∈ R^{m×n} and b ∈ R^m, the problem Ax = b, x > 0 can be formulated as (1) by taking L = {(x, t) ∈ R^{n+1} : Ax − tb = 0} and Ω = R^{n+1}_{++}. For A ∈ R^{m×n}, c ∈ R^n, the problem A
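The homogenization above can be checked numerically: if x solves Ax = b with x > 0, then the lifted point (x, 1) lies in both the subspace L = {(x, t) : Ax − tb = 0} and the cone interior Ω = R^{n+1}_{++}. A minimal sketch (the random instance and variable names are illustrative assumptions, not from the paper):

```python
import numpy as np

# Sketch: verify the reformulation of {Ax = b, x > 0} as problem (1),
# i.e. find z = (x, t) in L ∩ Ω with L = {(x, t) : Ax - t*b = 0}
# and Ω = R^{n+1}_{++}. Instance data below is synthetic.

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
x = rng.uniform(1.0, 2.0, size=n)   # a strictly positive solution
b = A @ x                            # choose b so that Ax = b holds

z = np.concatenate([x, [1.0]])       # lifted point (x, t) with t = 1

in_L = np.allclose(A @ z[:n] - z[n] * b, 0.0)  # z satisfies Ax - t*b = 0
in_Omega = bool(np.all(z > 0))                 # z is in the cone interior

print(in_L, in_Omega)  # True True: z ∈ L ∩ Ω
```

Conversely, any (x, t) ∈ L ∩ Ω yields the strictly feasible point x/t for the original system, since A(x/t) = b and x/t > 0.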
Large-scale constrained convex optimization problems arise in several application domains. First-order methods are good candidates for such problems because of their low per-iteration cost and memory requirements. The level-set framework extends the applicability of first-order methods to problems with complicated convex objectives and constraint sets. Current methods based on this framework either rely on the solution of challenging subproblems or do not guarantee a feasible solution, especially if the procedure is terminated before convergence. We develop a level-set method that finds an ε-relative optimal and feasible solution to a constrained convex optimization problem with a fairly general objective function and set of constraints, maintains a feasible solution at each iteration, and relies only on calls to first-order oracles. We establish the iteration complexity of our approach, also accounting for the smoothness and strong convexity of the objective function and constraints when these properties hold. The dependence of our complexity on ε is similar to the analogous dependence in the unconstrained setting, which is not known to be true for level-set methods in the literature. Nevertheless, ensuring feasibility is not free: the iteration complexity of our method depends on a condition number, whereas existing level-set methods that do not guarantee feasibility can avoid such dependence. We numerically validate the usefulness of ensuring a feasible solution path by comparing our approach with an existing level-set method on a Neyman-Pearson classification problem.
The perceptron algorithm, introduced in the late fifties in the machine learning community, is a simple greedy algorithm for finding a solution to a finite set of linear inequalities. The algorithm's main advantages are its simplicity and noise tolerance. The algorithm's main disadvantage is its slow convergence rate. We propose a modified version of the perceptron algorithm that retains the algorithm's original simplicity but has a substantially improved convergence rate.
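For reference, the classical perceptron iteration the abstract refers to can be sketched as follows. This is the original greedy scheme for finding y with Ay > 0 (row-wise strict inequalities), not the modified algorithm the paper proposes; the instance below is a synthetic assumption chosen so a solution exists:

```python
import numpy as np

def perceptron(A, max_iters=10_000):
    """Classical perceptron: seek y with A @ y > 0 by greedily adding
    any row whose inequality is violated. Returns None on budget exhaustion."""
    y = np.zeros(A.shape[1])
    for _ in range(max_iters):
        violated = np.flatnonzero(A @ y <= 0)   # rows with a_i . y <= 0
        if violated.size == 0:
            return y                            # all inequalities strict
        y = y + A[violated[0]]                  # greedy additive update
    return None

# Synthetic instance: rows clustered around a common direction d,
# so y = d satisfies A @ y > 0 and the iteration must terminate.
rng = np.random.default_rng(1)
d = np.ones(3) / np.sqrt(3)
A = d + 0.1 * rng.standard_normal((20, 3))
y = perceptron(A)
print(np.all(A @ y > 0))  # True
```

The slow convergence mentioned above is quantified by the classical bound: the number of updates scales as (R/γ)^2, where γ is the margin of the best separator, which is why margin-improving modifications pay off.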
Approximate linear programs (ALPs) are well-known models for computing value function approximations (VFAs) of intractable Markov decision processes (MDPs). VFAs from ALPs have desirable theoretical properties, define an operating policy, and provide a lower bound on the optimal policy cost. However, solving ALPs near-optimally remains challenging, for example, when approximating MDPs with nonlinear cost functions and transition dynamics or when rich basis functions are required to obtain a good VFA. We address this tension between theory and solvability by proposing a convex saddle-point reformulation of an ALP that includes as primal and dual variables, respectively, a vector of basis function weights and a constraint violation density function over the state-action space. To solve this reformulation, we develop a proximal stochastic mirror descent (PSMD) method that learns regions of high ALP constraint violation via its dual update. We establish that PSMD returns a near-optimal ALP solution and a lower bound on the optimal policy cost in a finite number of iterations with high probability. We numerically compare PSMD with several benchmarks on inventory control and energy storage applications. We find that the PSMD lower bound is tighter than a perfect information bound. In contrast, the constraint-sampling approach to solve ALPs may not provide a lower bound, and applying row generation to tackle ALPs is not computationally viable. PSMD policies outperform problem-specific heuristics and are comparable to or better than the policies obtained using constraint sampling. Overall, our ALP reformulation and solution approach broadens the applicability of approximate linear programming. This paper was accepted by Yinyu Ye, optimization.
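The dual update described above reweights the state-action space toward regions of high constraint violation. A generic entropy mirror-descent step on a discretized density illustrates the mechanism; this is a simplified sketch, not the paper's PSMD method, and the step size and violation estimates below are illustrative assumptions:

```python
import numpy as np

def entropy_mirror_step(p, grad, eta=0.1):
    """One mirror-descent ascent step with the entropy mirror map:
    multiplicative-weights update followed by renormalization onto the simplex."""
    w = p * np.exp(eta * grad)   # exponentiate the (stochastic) gradient signal
    return w / w.sum()           # project back to a probability density

p = np.full(4, 0.25)                           # uniform density over 4 regions
violations = np.array([0.0, 2.0, 0.5, 0.0])    # estimated constraint violations
p = entropy_mirror_step(p, violations)
print(p.argmax())  # mass shifts toward the most violated region: 1
```

Iterating such updates concentrates the dual density on the parts of the state-action space where the ALP constraints are most violated, which is the effect the PSMD dual update exploits.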