2020
DOI: 10.1007/s10957-020-01793-9

Convergent Inexact Penalty Decomposition Methods for Cardinality-Constrained Problems

Abstract: In this manuscript, we consider the problem of minimizing a smooth function with a cardinality constraint, i.e., the constraint requiring that the ℓ0-norm of the vector of variables cannot exceed a given threshold value. A well-known approach in the literature is represented by the class of penalty decomposition methods, where a sequence of penalty subproblems, depending on the original variables and new variables, is inexactly solved by a two-block decomposition method. The inner iterates of th…
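As a rough illustration of the scheme described in the abstract (not code from the paper), the following Python sketch shows a penalty decomposition loop for minimizing f(x) subject to ‖x‖₀ ≤ s: a copy y of the variables carries the cardinality constraint, the two blocks are updated alternately (gradient steps on x, a projection for y), and the penalty parameter is increased between outer iterations. All names, step sizes, and iteration counts are illustrative assumptions.

```python
import numpy as np

def project_sparse(x, s):
    """Euclidean projection onto {y : ||y||_0 <= s}:
    keep the s largest-magnitude entries of x, zero out the rest."""
    y = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    y[idx] = x[idx]
    return y

def penalty_decomposition(grad_f, x0, s, tau=1.0, rho=1.5,
                          outer_iters=30, inner_iters=50, step=1e-2):
    """Illustrative penalty decomposition loop (a sketch, not the authors'
    exact algorithm): alternately take a gradient step on
    f(x) + (tau/2)||x - y||^2 in the x-block and project onto the sparse
    set in the y-block, increasing tau between outer iterations."""
    x = x0.copy()
    y = project_sparse(x, s)
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            x = x - step * (grad_f(x) + tau * (x - y))  # inexact x-block update
            y = project_sparse(x, s)                    # exact y-block update
        tau *= rho                                      # tighten the linking penalty
    return project_sparse(x, s)  # return a feasible sparse point
```

For instance, with f(x) = ½‖Ax − b‖² one would pass grad_f = lambda x: A.T @ (A @ x - b), giving a simple sparse least-squares heuristic.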

Cited by 10 publications (8 citation statements) · References 28 publications

Citation statements (ordered by relevance):
“…G(x) ∈ C, x ∈ D, (1.1) where f : X → R and G : X → Y are continuously differentiable mappings, X and Y are Euclidean spaces, i.e., real and finite-dimensional Hilbert spaces, C ⊆ Y is nonempty, closed, and convex, whereas D ⊆ X is only assumed to be nonempty and closed (not necessarily convex), representing a possibly complicated set, for which, however, a projection operation is accessible. This very general setting (analyzed for example in [1]) covers, for example, standard nonlinear programming problems with convex constraints, but also difficult disjunctive programming problems [2][3][4][5], e.g., complementarity [6], vanishing [7], switching [8] and cardinality constrained [9,10] problems. Matrix optimization problems such as low-rank approximation [11,12] are also captured by our setting.…”
Section: Introduction (mentioning, confidence: 99%)
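The "accessible projection" onto the complicated set D mentioned in this excerpt can be made concrete with two standard examples cited there, the cardinality set and the low-rank set, both of which admit closed-form Euclidean projections. The sketch below is illustrative Python, not code from the cited works.

```python
import numpy as np

def proj_cardinality(x, s):
    """Projection onto D = {x : ||x||_0 <= s}: keep the s largest-magnitude entries."""
    y = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    y[idx] = x[idx]
    return y

def proj_low_rank(X, r):
    """Projection onto D = {X : rank(X) <= r}: truncated SVD (Eckart-Young)."""
    U, sv, Vt = np.linalg.svd(X, full_matrices=False)
    sv[r:] = 0.0          # drop all but the r leading singular values
    return (U * sv) @ Vt
```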
“…In recent years, lots of studies have been published that deal with problems with this structure, where the feasible set consists of the intersection of a collection of analytical constraints and a complicated, irregular set, manageable, for example, by easy projections. In particular, approaches based on decomposition and sequential penalty or augmented Lagrangian methods have been proposed for the convex case [13], the cardinality constrained case [10,14] and the low-rank approximation case [15]; the recurrent idea in all these works consists of the application of the variable splitting technique [16,17], to then define a penalty function associated with the differentiable constraints and the additional equality constraint linking the two blocks of variables and finally solve the problem by a sequential penalty method. The optimization of the penalty function is carried out by a two-block alternating minimization scheme [18], which can be run in an exact [14,15] or inexact [10,13] fashion.…”
Section: Introduction (mentioning, confidence: 99%)
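In the notation of the excerpts above (f, G, C, D), the variable-splitting idea referenced here can be written out explicitly; this is our own schematic summary of the quoted description, not a quotation from the cited works:

min_x f(x)  s.t.  G(x) ∈ C, x ∈ D   ⟶   min_{x,y} f(x)  s.t.  G(x) ∈ C, y ∈ D, x = y.

The differentiable constraints and the linking constraint x = y are then moved into a penalty function, one common choice being

q_τ(x, y) = f(x) + (τ/2) [ dist(G(x), C)² + ‖x − y‖² ],  minimized over x ∈ X, y ∈ D,

which the sequential penalty scheme minimizes approximately for an increasing sequence of penalty parameters τ, alternating between the x-block (a smooth subproblem, solved exactly or inexactly) and the y-block (a projection onto D).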
“…In addition to developing these optimality conditions, the authors also apply them to analyze the convergence properties of an alternating projection method for ℓ0-min(Ax = b) and a penalty decomposition method for ℓ0-cons(f, k, X) and ℓ0-reg(ρ, X), see also Section 4.4. The authors of [224] employ a similar penalty decomposition method for ℓ0-cons(f, k, R^n) with an emphasis on possibly nonconvex objective functions f, and in [304] a penalty decomposition-type algorithm is tailored to cardinality-constrained portfolio problems. Similar optimality conditions also form the basis of [189], where the authors consider regularized linear regression problems and combine a cyclic coordinate descent algorithm with local combinatorial optimization to escape local minima.…”
Section: Other Relaxations of Cardinality Constraints (mentioning, confidence: 99%)
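The alternating projection method for ℓ0-min(Ax = b) mentioned in this excerpt can be sketched in a few lines. This is an assumption-laden illustration rather than the algorithm of the cited reference: it alternates Euclidean projections between the affine set {x : Ax = b} and a sparse level set {x : ‖x‖₀ ≤ k}, assuming A has full row rank and a target sparsity k is supplied.

```python
import numpy as np

def proj_affine(x, A, b):
    """Projection onto {x : Ax = b} (assumes A has full row rank)."""
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

def proj_sparse(x, k):
    """Projection onto {x : ||x||_0 <= k}: keep the k largest-magnitude entries."""
    y = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    y[idx] = x[idx]
    return y

def alternating_projection(A, b, k, iters=500):
    """Alternate projections between the two sets; returns the last
    affine-feasible iterate (a heuristic sketch, no global guarantee)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # minimum-norm feasible start
    for _ in range(iters):
        x = proj_affine(proj_sparse(x, k), A, b)
    return x
```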
“…The approaches proposed in the literature for the solution of problem (1.1) include: exact methods (see, e.g., [6,7,28,29]) typically based on branch-and-bound or branch-and-cut strategies; methods that handle suitable reformulations of the problem based on orthogonality constraints (see, e.g., [9,10,11,13]); penalty decomposition methods, where penalty subproblems are solved by a block coordinate descent method [20,23]; methods that identify points satisfying tailored optimality conditions related to the problem [3,4]; heuristics like evolutionary algorithms [1], particle swarm methods [8,15], genetic algorithms, tabu search and simulated annealing [14], and also neural networks [18].…”
Mentioning (confidence: 99%)
“…The inherently combinatorial flavor of the given problem makes the definition of proper optimality conditions and, consequently, the development of algorithms that generate points satisfying those conditions a challenging task. A number of ways to address these issues are proposed in the literature (see, e.g., [3,4,11,20,23]). However, some of the optimality conditions proposed do not fully take into account the combinatorial nature of the problem, whereas some of the corresponding algorithms [3,23] require to exactly solve a sequence of nonconvex subproblems and this may be practically prohibitive.…”
Mentioning (confidence: 99%)