Maximizing a convex function over convex constraints is an NP-hard problem in general. We prove that such a problem can be reformulated as an adjustable robust optimization (ARO) problem in which each adjustable variable corresponds to a unique constraint of the original problem. We use ARO techniques to obtain approximate solutions to the convex maximization problem. To demonstrate the complete approximation scheme, we distinguish two cases: a single nonlinear constraint, and multiple linear constraints. For the first case, we give three examples in which one can analytically eliminate the adjustable variable and efficiently solve the resulting static robust optimization problem approximately. In particular, we show that the norm-constrained log-sum-exp (geometric) maximization problem can be approximated by (convex) exponential cone optimization techniques. For the second case of multiple linear constraints, the equivalent ARO problem can be represented as an adjustable robust linear optimization problem. Applying linear decision rules then yields a safe approximation of the constraints. The resulting problem is a convex optimization problem, and solving it gives an upper bound on the global optimum of the original problem. Evaluating the optimal linear decision rule yields a lower-bound solution as well. We derive the approximation problems explicitly for quadratic maximization, geometric maximization, and sum-of-max-linear-terms maximization problems with multiple linear constraints. Numerical experiments show that, in contrast to state-of-the-art solvers, we can approximate large-scale problems swiftly with tight bounds. In several cases the upper and lower bounds coincide, which gives a global optimality guarantee for these instances. Summary of Contribution: Maximizing a convex function over a convex set is a hard optimization problem. 
We reformulate this problem as an optimization problem under uncertainty, which allows us to transfer the hardness from the nonconvexity of the original problem to the uncertainty of the new one. The equivalent uncertain optimization problem can be relaxed tightly by using adjustable robust optimization techniques. In addition to building a new bridge between convex maximization and robust optimization, this approach yields strong algorithms that outperform state-of-the-art optimization solvers in both solution time and solution quality for various convex maximization problems.
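The source of the hardness described above can be illustrated with a small toy example (not the paper's ARO scheme): the maximum of a convex function over a polytope is always attained at a vertex, so a naive exact method must search a combinatorial (here, exponential) set of vertices. The quadratic below is an arbitrary illustrative choice.

```python
import itertools
import random

def f(x):
    # A convex quadratic: f(x) = sum_i x_i^2 + (sum_i x_i)^2
    return sum(v * v for v in x) + sum(x) ** 2

n = 8
# Brute-force search over the 2^n vertices of the box [-1, 1]^n
vertices = itertools.product([-1.0, 1.0], repeat=n)
vertex_max = max(f(list(v)) for v in vertices)

# Random interior points never beat the best vertex (convexity)
random.seed(0)
interior_best = max(
    f([random.uniform(-1, 1) for _ in range(n)]) for _ in range(1000)
)
assert interior_best <= vertex_max
print(vertex_max)  # 72.0, attained at x = ±(1, ..., 1)
```

At every vertex the first term equals n = 8, and the second term is maximized (at 64) when all coordinates share the same sign, so the global maximum is 72 while interior samples stay strictly below it.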
We introduce a solution scheme for portfolio optimization problems with cardinality constraints. Typical portfolio optimization problems are extensions of the classical Markowitz mean-variance model. We solve such problems using a method similar to column generation: the original problem is restricted to a subset of the assets, yielding a convex quadratic master problem, and the dual information of the master problem is then used in a subproblem to propose additional assets to consider. We also consider other extensions of the Markowitz model that diversify the portfolio selection by keeping active weights within given intervals.
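A minimal sketch of this master/subproblem interplay, under simplifying assumptions (not the paper's exact algorithm): we take the equality-constrained min-variance problem min wᵀΣw s.t. Σᵢwᵢ = 1, solve it on a small asset subset in closed form via the KKT system, and use the dual of the budget constraint to price in new assets. The covariance matrix is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
Sigma = A @ A.T + 6 * np.eye(6)   # synthetic positive-definite covariance

def restricted_master(S):
    """Solve min w'Sigma_S w s.t. sum(w)=1 on subset S; return (w_S, dual)."""
    sub = Sigma[np.ix_(S, S)]
    ones = np.ones(len(S))
    x = np.linalg.solve(sub, ones)          # Sigma_S^{-1} 1
    lam = 2.0 / (ones @ x)                  # dual of the budget constraint
    return (lam / 2.0) * x, lam

S = [0]                                     # start with a single asset
while True:
    w_S, lam = restricted_master(S)
    # Reduced cost of asset j: gradient of the Lagrangian at w_j = 0
    rc = {j: 2 * Sigma[j, S] @ w_S - lam for j in range(6) if j not in S}
    if not rc or max(abs(v) for v in rc.values()) < 1e-9:
        break                               # no improving asset remains
    S.append(max(rc, key=lambda j: abs(rc[j])))  # price in the best asset

w = np.zeros(6)
w[S] = w_S
# Sanity check: the generated solution matches the full closed-form optimum
w_full = np.linalg.solve(Sigma, np.ones(6))
w_full /= w_full.sum()
assert np.allclose(w, w_full)
```

Because the problem is convex, zero reduced costs on all excluded assets certify that the restricted optimum (padded with zeros) solves the full problem, which is what makes this delayed generation of assets sound.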
Although the world's total water resources are sufficient, their spatial distribution is uneven [1]. Dams play a vital role in storing water for use in times of need, yet despite the many dams that have been built, water shortages persist. In this paper we elaborate on the reasons for this and on possible solutions. This study examines the daily inflow and outflow of water at the Mettur Dam from June 2000 to May 2001 and assesses whether this water suffices for people's livelihoods. We show that asymptotic hypothesis tests for Gaussian processes allow us to estimate the time of a structural break and to compare these estimates with the observed water flow at the Mettur Dam.
We study non-convex optimization problems over simplices. We show that for a large class of objective functions, the convex approximation obtained from the Reformulation-Linearization Technique (RLT) admits optimal solutions that exhibit a sparsity pattern. This characteristic of the optimal solutions allows us to conclude that (i) a linear matrix inequality constraint, which is often added to tighten the relaxation, is vacuously satisfied and can thus be omitted, and (ii) the number of decision variables in the RLT relaxation can be reduced from $\mathcal{O}(n^2)$ to $\mathcal{O}(n)$. Taken together, both observations allow us to reduce computation times by up to several orders of magnitude. Our results can be specialized to indefinite quadratic optimization problems over simplices and extended to non-convex optimization problems over the Cartesian product of two simplices as well as specific classes of polyhedral and non-convex feasible regions. Our numerical experiments illustrate the promising performance of the proposed framework.
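The sparsity phenomenon can be seen in a stripped-down toy version of the relaxation (a sketch, not the paper's construction): for min xᵀQx over the simplex, the most basic RLT step replaces the products xᵢxⱼ by variables Xᵢⱼ ≥ 0 with Σᵢⱼ Xᵢⱼ = 1. The resulting LP puts all its mass on the smallest entry of the symmetrized Q, i.e. its optimal solution is O(1)-sparse.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
Q = rng.standard_normal((5, 5))
Qs = (Q + Q.T) / 2                      # symmetrize the objective

# Basic RLT LP: min <Qs, X>  s.t.  X >= 0, sum_ij X_ij = 1
c = Qs.flatten()
A_eq = np.ones((1, 25))
res = linprog(c, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))

assert res.success
# The LP optimum equals the smallest entry of Qs, and the optimal X
# concentrates its mass on that single symmetrized entry.
assert np.isclose(res.fun, Qs.min())
assert np.count_nonzero(res.x > 1e-9) <= 2
```

Here the optimal X has at most two nonzeros (one symmetric pair), a drastic drop from the n² variables the relaxation nominally carries; the paper's results establish analogous sparsity for much richer RLT relaxations.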