2009
DOI: 10.1007/s10589-009-9251-8

A coordinate gradient descent method for ℓ1-regularized convex minimization

Abstract: In applications such as signal processing and statistics, many problems involve finding sparse solutions to under-determined linear systems of equations. These problems can be formulated as structured nonsmooth optimization problems, i.e., the problem of minimizing ℓ1-regularized linear least squares problems. In this paper, we propose a block coordinate gradient descent method (abbreviated as CGD) to solve the more general ℓ1-regularized convex minimization problems, i.e., the problem of minimizing an ℓ1-re…
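For reference, the two problem classes named in the abstract can be written explicitly; the notation below (A, b, f, λ) is generic and not taken from the paper's own numbering.

% l1-regularized linear least squares
\min_{x \in \mathbb{R}^n} \ \tfrac{1}{2}\,\|Ax - b\|_2^2 + \lambda\,\|x\|_1

% the more general l1-regularized convex problem treated by the CGD method
\min_{x \in \mathbb{R}^n} \ F(x) := f(x) + \lambda\,\|x\|_1, \qquad f \ \text{smooth and convex}, \ \lambda > 0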

Cited by 91 publications (77 citation statements) · References 31 publications

Citation statements, ordered by relevance:
“…This method approximates the smooth function by a quadratic function at the current iterate, applies block coordinate descent to generate a feasible descent direction, and then updates the current iterate by performing an inexact line search along the descent direction. Numerical performances in [25,31] suggest that the BCGD method can be effective in practice. We extend this method to solve (8) and, in particular, the covariance selection problems (1) with α = 0 and β = +∞, (5) with ρ ij > 0 for (i, j) ∈ V , and the dual problems (2), (7) with ρ ij > 0 for (i, j) ∈ V .…”
Section: s.t. αI ⪯ X ⪯ βI (mentioning)
confidence: 99%
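As a rough illustration of the iteration just described (a quadratic model of the smooth part, a coordinate descent step on that model, then an inexact line search), here is a minimal Python sketch for F(x) = f(x) + λ‖x‖₁ with a diagonal Hessian approximation. The function names, the soft-thresholding form of the subproblem solution, and the Armijo parameters are illustrative choices, not details taken from the cited papers.

import numpy as np

def cgd_step(x, f, grad_f, lam, h, block, beta=0.5, sigma=0.1):
    """One block coordinate gradient descent step for F(x) = f(x) + lam*||x||_1.

    x: current iterate; f, grad_f: smooth part and its gradient (callables);
    lam: l1 weight; h: positive diagonal Hessian approximation (1-D array);
    block: indices of the coordinates updated at this iteration.
    """
    g = grad_f(x)
    d = np.zeros_like(x)
    # Minimize the separable quadratic model over the chosen block:
    #   g_j*d_j + 0.5*h_j*d_j**2 + lam*|x_j + d_j|   (soft-thresholding)
    for j in block:
        z = x[j] - g[j] / h[j]
        d[j] = np.sign(z) * max(abs(z) - lam / h[j], 0.0) - x[j]
    # Inexact (Armijo-type) backtracking line search along the descent direction d
    delta = g @ d + lam * (np.linalg.norm(x + d, 1) - np.linalg.norm(x, 1))
    F_x = f(x) + lam * np.linalg.norm(x, 1)
    t = 1.0
    while f(x + t * d) + lam * np.linalg.norm(x + t * d, 1) > F_x + sigma * t * delta:
        t *= beta
    return x + t * d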
“…where the third inequality uses the self-adjoint positive definite property of H_k and the last inequality uses (31).…”
Section: Iteration Complexity (mentioning)
confidence: 99%
“…Some examples include the Nyström method [79], [80] and incomplete Cholesky factorization [81], [82]. Some works (e.g., [19]) consider approximations other than (39), but also lead to linear classification problems. A recent study [78] addresses more on training and testing linear SVM after obtaining the low-rank approximation.…”
Section: B. Approximation of Kernel Methods via Linear Classification (mentioning)
confidence: 99%
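To make the low-rank route concrete: both the Nyström method and incomplete Cholesky factorization replace the kernel matrix K by a factorization K ≈ Z Zᵀ, so that a linear classifier can be trained on the rows of Z. The sketch below shows only the Nyström variant; the uniform landmark sampling, the RBF kernel, and the eigenvalue cutoff are arbitrary illustrative choices.

import numpy as np

def nystrom_features(X, landmarks, kernel, tol=1e-10):
    """Nystrom feature map: returns Z with Z @ Z.T ~= kernel(X, X)."""
    C = kernel(X, landmarks)          # (n, m) cross-kernel block
    W = kernel(landmarks, landmarks)  # (m, m) landmark kernel block
    # Symmetric square root of the pseudo-inverse of W (tiny eigenvalues dropped)
    vals, vecs = np.linalg.eigh(W)
    keep = vals > tol
    return C @ (vecs[:, keep] / np.sqrt(vals[keep]))

# Illustrative usage with an RBF kernel and uniformly sampled landmarks
rbf = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
X = np.random.randn(200, 5)
idx = np.random.choice(len(X), 20, replace=False)
Z = nystrom_features(X, X[idx], rbf)   # feed Z to any linear classifier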
“…• L1-regularized LR: Most methods solve the primal form, for example, an interior-point method (l1_logreg [37]), (block) coordinate descent methods (BBR [38] and CGD [39]), a quasi-Newton method (OWL-QN [40]), Newton-type methods (GLMNET [41] and LIBLINEAR [22]), and a Nesterov-type method (SLEP [42]). Recently, an augmented Lagrangian method (DAL [43]) was proposed for solving the dual problem.…”
Section: A. Issues in Finding Suitable Algorithms (mentioning)
confidence: 99%
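For context, the primal problem these solvers target is min_w Σ_i log(1 + exp(-y_i wᵀx_i)) + λ‖w‖₁. A minimal usage sketch, assuming scikit-learn as the interface to the LIBLINEAR solver mentioned above (the synthetic data and the value of C, the inverse regularization weight, are arbitrary):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# L1-regularized logistic regression solved in the primal by LIBLINEAR
X, y = make_classification(n_samples=500, n_features=50, n_informative=5, random_state=0)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print("nonzero coefficients:", int((clf.coef_ != 0).sum()))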
“…Our main idea is first to make block coordinate gradient descent [35], [36] feasible over all subgraph indicators. Then, by setting a small block size to update, Problem (2) with an intractably large number of variables becomes practically solvable.…”
Section: Learning Sparse Linear Models by Block Coordinate Gradient Descent (mentioning)
confidence: 99%
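The "small block size" idea in the quote can be illustrated by a selection rule that touches only a few coordinates per iteration, so the per-iteration cost stays bounded even when the number of variables (here, subgraph indicators) is huge. The Gauss-Southwell-style rule below, which picks the coordinates with the largest optimality violation for f(x) + λ‖x‖₁, is one common choice and not necessarily the rule used in the cited work.

import numpy as np

def select_block(x, g, lam, block_size):
    """Pick a small working block of coordinates for the next update.

    x: current iterate; g: gradient of the smooth part at x;
    lam: l1 weight; block_size: number of coordinates to update.
    """
    # Coordinate-wise minimum-norm subgradient of f(x) + lam*||x||_1
    viol = np.where(x != 0.0,
                    np.abs(g + lam * np.sign(x)),
                    np.maximum(np.abs(g) - lam, 0.0))
    return np.argsort(viol)[-block_size:]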