2004
DOI: 10.1007/s10107-004-0552-5
Smooth minimization of non-smooth functions

Cited by 2,117 publications (2,292 citation statements); references 7 publications.
“…First, when all φℓ are smooth, so is φ, in contrast to the objective in (7), which typically is nonsmooth; this makes the saddle point reformulation better suited for processing by first order algorithms. Starting with the breakthrough paper of Nesterov [12], this phenomenon, in its general form, is utilized in the fastest first order algorithms known so far for "well-structured" nonsmooth convex programs. Second, in the stochastic case, stochastic oracles providing unbiased estimates of the first order information on the φℓ, while not inducing a similar oracle for the objective of (7), do induce such an oracle for the v.i.…”

Section: Composite Optimization Problem and Its Saddle Point Reformulation (mentioning)
confidence: 99%
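The phenomenon the excerpt describes, a nonsmooth objective becoming a saddle point of a function that is smooth (here bilinear) in the primal variable, can be illustrated on ℓ1 fitting: ‖r‖₁ = max over ‖y‖∞ ≤ 1 of yᵀr, with the inner maximizer yᵢ = sign(rᵢ). A minimal numerical check; the function name and residual vector below are illustrative, not taken from the cited paper:

```python
# Verify ||r||_1 = max_{|y_i| <= 1} y^T r by plugging in the
# optimal inner variable y_i = sign(r_i).
def l1_via_saddle(r):
    y = [1.0 if ri >= 0 else -1.0 for ri in r]  # optimal inner y
    return sum(yi * ri for yi, ri in zip(y, r))

r = [1.5, -2.0, 0.0, 3.25]
print(l1_via_saddle(r))             # 6.75
print(sum(abs(ri) for ri in r))     # 6.75, the same value
```

The point of the reformulation is that yᵀ(Ax − b) is smooth (linear) in x for every fixed y, even though the max over y is not.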
“…In fact, the "best approximations" available are given by Robust Stochastic Approximation (see [4] and references therein), with the guaranteed rate of convergence O(1)(L+M+σ)/√t, and by extra-gradient-type algorithms for solving deterministic monotone v.i.'s with Lipschitz continuous operators (see [9,12–14]), which attain the accuracy O(1)L/t in the case M = σ = 0, or O(1)M/√t when L = σ = 0. The goal of this paper is to demonstrate that a specific Mirror-Prox algorithm [9] for solving monotone v.i.…”

(mentioning)
confidence: 99%
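The extra-gradient scheme referenced in this excerpt can be illustrated on the simplest monotone v.i.: the bilinear saddle point min over x, max over y of xy, whose operator F(x, y) = (y, −x) is monotone with Lipschitz constant L = 1. This is a sketch of the basic Euclidean extra-gradient step (not the Mirror-Prox algorithm of [9] itself, which uses prox-mappings with respect to a general distance-generating function); the step size and iteration count are illustrative:

```python
import math

def F(z):
    """Monotone operator of the bilinear saddle point min_x max_y x*y."""
    x, y = z
    return (y, -x)

def extragradient(z0, gamma=0.1, iters=500):
    """Extra-gradient step: evaluate F at an extrapolated half-step point."""
    x, y = z0
    for _ in range(iters):
        gx, gy = F((x, y))
        xh, yh = x - gamma * gx, y - gamma * gy   # extrapolation (half) step
        gx, gy = F((xh, yh))
        x, y = x - gamma * gx, y - gamma * gy     # update from the original point
    return (x, y)

def plain_gda(z0, gamma=0.1, iters=500):
    """Plain gradient descent-ascent, which diverges on this problem."""
    x, y = z0
    for _ in range(iters):
        gx, gy = F((x, y))
        x, y = x - gamma * gx, y - gamma * gy
    return (x, y)

z_eg = extragradient((1.0, 1.0))
z_gda = plain_gda((1.0, 1.0))
print(math.hypot(*z_eg))   # small: converges toward the saddle point (0, 0)
print(math.hypot(*z_gda))  # large: norm grows by sqrt(1 + gamma^2) per step
```

The comparison shows why the extrapolation step matters: on bilinear problems plain descent-ascent spirals outward, while the extra-gradient iterates contract toward the saddle point.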
“…Finally, smooth approximations to a wide class of non-smooth convex functions are available thanks to the recent developments in the non-smooth convex optimization literature (see e.g. Nesterov, 2005; Beck & Teboulle, 2012). Using these results, we provide functional forms of smooth approximations to some of the commonly used test statistics.…”

Section: Regularization Of Test Statistics (mentioning)
confidence: 99%
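A concrete instance of such a smooth approximation: Nesterov's smoothing applied to the absolute value, |x| = max over |u| ≤ 1 of ux, with a quadratic prox term µu²/2, yields the Huber function, which deviates from |x| by at most µ/2 and has a 1/µ-Lipschitz gradient. A minimal sketch; the function name and test grid are mine, not from the cited papers:

```python
def huber(x, mu):
    """Nesterov smoothing of |x|: f_mu(x) = max_{|u|<=1} (u*x - mu*u**2/2).

    Closed form: quadratic near the origin, linear (shifted down by mu/2)
    once |x| exceeds mu.
    """
    if abs(x) <= mu:
        return x * x / (2.0 * mu)
    return abs(x) - mu / 2.0

mu = 0.1
# Uniform approximation bound: 0 <= |x| - f_mu(x) <= mu/2 for all x,
# with the gap equal to mu/2 wherever |x| >= mu.
gaps = [abs(x) - huber(x, mu) for x in [i / 100.0 for i in range(-300, 301)]]
print(max(gaps))  # approximately mu/2 = 0.05
```

Minimizing the Huber surrogate with an accelerated gradient method is exactly the mechanism behind the O(1/t) rates for smoothed nonsmooth problems mentioned above.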
“…While the idea of regularization appears in some other contexts, such as Haile & Tamer (2003), Chernozhukov, Kocatulum & Menzel (2015) and Masten & Poirier (2017), we formally show that this approach has uniform validity in the context of inference with simulated variables. Our regularization method is based on the class of µ-smooth approximations studied in the non-smooth optimization literature (Nesterov, 2005; Beck & Teboulle, 2012). We provide conditions on the choice of approximating functions and regularization parameters that ensure the uniform validity of an inference procedure that combines the proposed regularization scheme with a straightforward bootstrap resampling.…”

Section: Introduction (mentioning)
confidence: 99%
“…This type of smoothing has been proposed by many authors for solving convex finite minimax problems, in particular by Bertsekas [7], Ben-Tal and Teboulle [6], Alvarez [1], and Nesterov [18]. This smoothing approach has also been proposed by Polak-Royset-Womersley [20], by Sheu-Wu [27] for finite min-max problems subject to infinitely many linear constraints and, more recently, by Sheu-Lin [26] for continuous min-max problems, motivated by the global approach of Fang-Wu [12] using an integral analog.…”

Section: Introduction (mentioning)
confidence: 99%
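The classical smoothing of a finite minimax objective discussed in this excerpt replaces f(x) = max over i of fᵢ(x) with the exponential penalty g_µ(x) = µ · log Σᵢ exp(fᵢ(x)/µ), which satisfies max fᵢ ≤ g_µ ≤ max fᵢ + µ log m for m terms. A sketch under illustrative values (the max-subtraction trick guards against overflow; function name and inputs are mine):

```python
import math

def smooth_max(values, mu):
    """Log-sum-exp smoothing of max(values).

    Uses the shift identity mu*log(sum exp(v_i/mu)) =
    m + mu*log(sum exp((v_i - m)/mu)) with m = max(values),
    so the exponentials never overflow.
    """
    m = max(values)
    return m + mu * math.log(sum(math.exp((v - m) / mu) for v in values))

vals = [1.0, 2.0, 0.5]
for mu in (1.0, 0.1, 0.01):
    # Sandwich bound: max(vals) <= smooth_max <= max(vals) + mu*log(len(vals))
    print(mu, smooth_max(vals, mu))
```

As µ shrinks, g_µ approaches the true max while its gradient Lipschitz constant grows like 1/µ, the same accuracy-smoothness trade-off exploited throughout this literature.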