2015
DOI: 10.1007/s10107-015-0934-x

On Lipschitz optimization based on gray-box piecewise linearization

Abstract: We address the problem of minimizing objectives from the class of piecewise differentiable functions whose nonsmoothness can be encapsulated in the absolute value function. They possess local piecewise linear approximations with a discrepancy that can be bounded by a quadratic proximal term. This overestimating local model is continuous but generally nonconvex. It can be generated in its abs-normal form by a minor extension of standard algorithmic differentiation tools. Here we demonstrate how the local model c…
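The construction sketched in the abstract can be illustrated numerically: smooth intermediates are replaced by their tangent increments, while each absolute value is kept as an absolute value of its linearized argument, so the kink survives in the local model and the discrepancy is bounded quadratically. A minimal sketch (the function `phi` and all names below are our own illustrative choices, not the authors' code):

```python
# Piecewise linearization of the illustrative function
#   phi(x1, x2) = |x1**2 - x2| + x2
# Smooth parts are linearized by tangents; abs is kept as abs of the
# linearized argument, so the model stays piecewise linear.

def phi(x1, x2):
    return abs(x1**2 - x2) + x2

def delta_phi(x1, x2, d1, d2):
    """Piecewise linear model of phi(x + dx) - phi(x) at (x1, x2)."""
    inner = x1**2 - x2            # smooth intermediate value
    d_inner = 2 * x1 * d1 - d2    # its tangent increment
    # abs is linearized as |inner + d_inner| - |inner| (kink preserved)
    return abs(inner + d_inner) - abs(inner) + d2

# The discrepancy phi(x+dx) - phi(x) - delta_phi shrinks like ||dx||^2.
x1, x2 = 1.0, 0.5
for t in (1e-1, 1e-2, 1e-3):
    d1, d2 = 0.3 * t, -0.2 * t
    err = phi(x1 + d1, x2 + d2) - phi(x1, x2) - delta_phi(x1, x2, d1, d2)
    print(t, err)   # err is O(t^2) as t shrinks
```

For this example the discrepancy is exactly `d1**2` (the only second-order term of the smooth intermediate), which makes the quadratic bound from the abstract visible directly.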

Cited by 22 publications (13 citation statements)
References 26 publications
“…Let us mention [29], which developed piecewise linear approximations for functions that can be expressed using absolute value, min, or max operators. This approach led to successful algorithmic developments [30], but it may suffer from high computational complexity and a lack of versatility (the Euclidean norm cannot be handled within this framework). Another attempt, using the same model of branching programs, was described in [33], where a qualification assumption is used to compute Clarke generalized derivatives automatically.…”
Section: Automatic Differentiation
confidence: 99%
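The snippet's restriction to functions "expressed using absolute value, min or max operators" is less narrow than it may sound, because min and max reduce to abs via the identities max(a, b) = (a + b + |a − b|)/2 and min(a, b) = (a + b − |a − b|)/2, leaving a single nonsmooth primitive. A quick check of the identities (illustrative only, not from the cited works):

```python
# min and max rewritten through abs, so one nonsmooth primitive suffices
# (the reduction behind the "absolute value, min or max" class).

def max_via_abs(a, b):
    return 0.5 * (a + b + abs(a - b))

def min_via_abs(a, b):
    return 0.5 * (a + b - abs(a - b))

pairs = [(3.0, -1.5), (-2.0, -2.0), (0.0, 7.25)]
assert all(max_via_abs(a, b) == max(a, b) for a, b in pairs)
assert all(min_via_abs(a, b) == min(a, b) for a, b in pairs)
```

The Euclidean norm, by contrast, is not expressible through finitely many abs/min/max operations on smooth arguments, which is the versatility gap the snippet points out.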
“…However, we will also frequently consider the situation where σ varies over all possibilities in {−1, 0, 1}^s. As observed already in [10] for the nonlinear case, the limiting gradients of ϕ in the vicinity of x are given by (8) g…”
Section: Proof
confidence: 72%
“…This partial ordering of the signature vectors was already used in [10]. As in the piecewise linear case, we find that the closure S̄_σ of any S_σ is contained in the extended closure Ŝ…”
Section: Proof
confidence: 86%