2015
DOI: 10.1007/s10107-015-0875-4
A second-order method for strongly convex $\ell_1$-regularization problems

Abstract: In this paper a robust second-order method is developed for the solution of strongly convex $\ell_1$-regularized problems. The main aim is to make the proposed method as inexpensive as possible while still solving even difficult problems efficiently. The proposed approach is a primal-dual Newton Conjugate Gradients (pdNCG) method. Convergence properties of pdNCG are studied and worst-case iteration complexity is established. Numerical results are presented on synthetic sparse least-squares problems and real world m…
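As a rough illustration of the setting (a minimal sketch, not the authors' pdNCG implementation), the $\ell_1$ term can be smoothed with the pseudo-Huber function discussed in the citing excerpts below, and the resulting twice-differentiable problem handed to an off-the-shelf Newton-CG solver. All problem data and parameters here (A, b, tau, mu) are illustrative assumptions:

```python
# Sketch: smooth the l1 term with pseudo-Huber and minimize with Newton-CG.
# Not the paper's pdNCG method; a generic stand-in for the problem class.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, n = 200, 100
A = rng.standard_normal((m, n))   # full column rank a.s., so f is strongly convex
b = rng.standard_normal(m)
tau, mu = 0.1, 1e-3               # regularization and smoothing parameters

def f(x):
    # 0.5*||Ax - b||^2 + tau * sum_i (sqrt(mu^2 + x_i^2) - mu)
    r = A @ x - b
    return 0.5 * r @ r + tau * np.sum(np.sqrt(mu**2 + x**2) - mu)

def grad(x):
    return A.T @ (A @ x - b) + tau * x / np.sqrt(mu**2 + x**2)

def hessp(x, v):
    # Hessian-vector product: A^T A v + tau * diag(mu^2 / (mu^2 + x^2)^{3/2}) v
    return A.T @ (A @ v) + tau * (mu**2 / (mu**2 + x**2) ** 1.5) * v

res = minimize(f, np.zeros(n), jac=grad, hessp=hessp, method="Newton-CG")
print(res.success, f(res.x))
```

The pseudo-Huber smoothing keeps the objective twice differentiable, which is what makes a Newton-type method applicable at all; the paper's contribution is in how the Newton systems are handled, which this sketch does not reproduce.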

Cited by 52 publications (65 citation statements) · References 25 publications
“…Properties of the pseudo-Huber function and its application to compressed sensing problems are described in [16]. The objective function of the resulting convex optimization problem is…”
mentioning
confidence: 99%
“…Thus, we assume that CG initialized with the zero vector is employed at each inexact Newton iteration. We will make use of the following technical lemma that is proved in [17].…”
Section: Inexact Subsampled Newton Methods
mentioning
confidence: 99%
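To make the quoted setup concrete, here is a minimal sketch of one inexact Newton step whose linear system is solved by CG started from the zero vector; the Hessian H and gradient g are illustrative stand-ins, not data from the cited paper:

```python
# One inexact Newton step: solve H d = -g approximately with CG,
# starting CG from the zero vector as in the quoted excerpt.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
H = M.T @ M + np.eye(n)        # symmetric positive definite stand-in Hessian
g = rng.standard_normal(n)     # stand-in gradient at the current iterate

# x0 is the zero vector; the CG stopping tolerance plays the role of the
# forcing term that controls how inexact the Newton step is allowed to be.
d, info = cg(H, -g, x0=np.zeros(n))
print("CG converged:", info == 0)  # a line search along d would follow
```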
“…Because all local minimizers of the twice-differentiable functions $f_n$ are global minimizers from [Le and White, 2017, Theorem 10], we can conclude that all corresponding minimizers of f are global minimizers. We use the pseudo-Huber loss [Fountoulakis and Gondzio, 2013], which is a twice-differentiable approximation to the absolute value: $|x|_\mu = \sqrt{\mu^2 + x^2} - \mu$. Let θ = (Φ, B, w).…”
Section: Local Minima Are Global Minima
mentioning
confidence: 99%
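For context (standard calculus on the formula quoted above, not text from the citing paper), the first two derivatives of the pseudo-Huber function show why it is a twice-differentiable approximation of $|x|$:

```latex
|x|_\mu = \sqrt{\mu^2 + x^2} - \mu, \qquad
\frac{\mathrm{d}}{\mathrm{d}x}\,|x|_\mu = \frac{x}{\sqrt{\mu^2 + x^2}}, \qquad
\frac{\mathrm{d}^2}{\mathrm{d}x^2}\,|x|_\mu = \frac{\mu^2}{(\mu^2 + x^2)^{3/2}}.
```

The second derivative is strictly positive for every $x$ whenever $\mu > 0$, and $|x|_\mu \to |x|$ pointwise as $\mu \to 0$, which is the sense in which it approximates the absolute value.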