In this paper, we study the sequential convex programming method with monotone line search (SCP_ls) in [46] for a class of difference-of-convex (DC) optimization problems with multiple smooth inequality constraints. SCP_ls is a representative variant of moving-ball-approximation-type algorithms [6,10,13,54] for constrained optimization problems. We analyze the convergence rate of the sequence generated by SCP_ls in both nonconvex and convex settings by imposing suitable Kurdyka-Łojasiewicz (KL) assumptions. Specifically, in the nonconvex setting, we assume that a special potential function related to the objective and the constraints is a KL function, while in the convex setting we impose KL assumptions directly on the extended objective function (i.e., the sum of the objective and the indicator function of the constraint set). A relationship between these two different KL assumptions is established in the convex setting under additional differentiability assumptions. We also discuss how to deduce the KL exponent of the extended objective function from its Lagrangian in the convex setting, under additional assumptions on the constraint functions. Thanks to this result, the extended objectives of some constrained optimization models, such as minimizing ℓ1 subject to logistic/Poisson loss, are found to be KL functions with exponent 1/2 under mild assumptions. To illustrate how our results can be applied, we consider SCP_ls for minimizing ℓ1−ℓ2 [60] subject to a residual error measured by the ℓ2 norm/Lorentzian norm [21]. We first discuss how the various conditions required in our analysis can be verified, and then perform numerical experiments to illustrate the convergence behavior of SCP_ls.
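As a rough sketch (the symbols $P_1$, $P_2$, $g_i$ and the exact smoothness and convexity assumptions below are illustrative rather than taken verbatim from [46]), this problem class and the extended objective mentioned above can be written as
$$
\min_{x \in \mathbb{R}^n} \; P_1(x) - P_2(x) \quad \text{subject to} \quad g_i(x) \le 0, \; i = 1, \dots, m,
$$
with $P_1$, $P_2$ convex and each $g_i$ smooth; the extended objective is then $F(x) = P_1(x) - P_2(x) + \delta_C(x)$, where $\delta_C$ denotes the indicator function of the feasible set $C = \{x : g_i(x) \le 0 \text{ for all } i\}$.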
The ratio of the ℓ1 and ℓ2 norms was proposed as a sparsity-inducing function for noiseless compressed sensing. In this paper, we further study properties of this model in the noiseless setting, and propose an algorithm for minimizing ℓ1/ℓ2 when the measurements are contaminated by noise. Specifically, we show that the extended objective function (the sum of the objective and the indicator function of the constraint set) of the model in [Y. Rahimi, C. Wang, H. Dong, and Y. Lou, SIAM J. Sci. Comput., 41 (2019), pp. A3649--A3672] satisfies the Kurdyka-Łojasiewicz (KL) property with exponent 1/2; this allows us to establish linear convergence of the algorithm proposed in [C. Wang, M. Yan, and Y. Lou, IEEE Trans. Signal Process., 68 (2020), pp. 2660--2669] (see equation (11) therein) under mild assumptions. We next extend the ℓ1/ℓ2 model to handle compressed sensing problems with noise. We establish the existence of solutions for some of these models under the spherical section property [...]
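For context, the underlying models take roughly the following forms; the noisy formulation shown here is one representative variant with an assumed noise bound $\sigma$, not necessarily the exact model studied in the paper:
$$
\min_{x \ne 0} \; \frac{\|x\|_1}{\|x\|_2} \quad \text{subject to} \quad Ax = b \quad \text{(noiseless)}, \qquad
\min_{x \ne 0} \; \frac{\|x\|_1}{\|x\|_2} \quad \text{subject to} \quad \|Ax - b\|_2 \le \sigma \quad \text{(noisy)}.
$$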
The Kurdyka-Łojasiewicz (KL) exponent plays an important role in estimating the convergence rate of many contemporary first-order methods. In particular, a KL exponent of 1/2 is related to local linear convergence. Nevertheless, the KL exponent is in general extremely hard to estimate. In this paper, we show under mild assumptions that the KL exponent is preserved via inf-projection. Inf-projection is a fundamental operation that is ubiquitous when reformulating optimization problems via the lift-and-project approach. By studying how this operation acts on the KL exponent, we show that the KL exponent is 1/2 for several important convex optimization models, including some semidefinite-programming-representable functions and functions that involve C²-cone reducible structures, under conditions such as strict complementarity. Our results are applicable to concrete optimization models such as group fused Lasso and overlapping group Lasso. In addition, for nonconvex models, we show that the KL exponent of many difference-of-convex functions can be derived from that of their natural majorant functions, and that the KL exponent of the Bregman envelope of a function is the same as that of the function itself. Finally, we estimate the KL exponent of the sum of the least squares function and the indicator function of the set of matrices of rank at most k.
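For reference, a standard way to state the KL property in this literature (the constants $c$, $\epsilon$, $\nu$ below are generic) is the following: a proper closed function $f$ satisfies the KL property at $\bar{x} \in \operatorname{dom}\partial f$ with exponent $\alpha \in [0,1)$ if there exist $c, \epsilon, \nu > 0$ such that
$$
\operatorname{dist}\big(0, \partial f(x)\big) \ge c\,\big(f(x) - f(\bar{x})\big)^{\alpha}
\quad \text{whenever } \|x - \bar{x}\| \le \epsilon \text{ and } f(\bar{x}) < f(x) < f(\bar{x}) + \nu.
$$
The case $\alpha = 1/2$ is the one that yields local linear convergence for many first-order methods.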