2019
DOI: 10.1137/18m1226282
Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization

Abstract: A regularization algorithm using inexact function values and inexact derivatives is proposed and its evaluation complexity analyzed. This algorithm is applicable to unconstrained problems and to problems with inexpensive constraints (that is, constraints whose evaluation and enforcement have negligible cost) under the assumption that the derivative of highest degree is β-Hölder continuous. It features a very flexible adaptive mechanism for determining the inexactness which is allowed, at each iteration, when com…

Cited by 47 publications (68 citation statements)
References 32 publications
“…These sample sizes are adaptively chosen by the procedure in order to satisfy (10)-(12) with probability at least 1 − t. We underline that the enforcement of (10)-(11) on function evaluations is relaxed in our experiments and that such accuracy requirements are now supposed to hold in probability. Given the prefixed absolute accuracies ν_{ℓ,k} for the derivative of order ℓ at iteration k and t a prescribed probability of failure, by using results in [40] and [6], the approximations given in (17)-(19) satisfy (ℓ ∈ {1, 2}):…”
Section: A. Implementation Issues and Results
confidence: 99%
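The sample-size rule described in this quote can be illustrated with a simple concentration-inequality calculation. The sketch below uses a Hoeffding-style bound (an assumption for illustration; the cited works use sharper Bernstein-type results), where `kappa` bounds each summed term, `nu` is the prefixed absolute accuracy ν_{ℓ,k}, `t` the allowed failure probability, and `N` the full sample count:

```python
import math

def sample_size(kappa, nu, t, N):
    """Illustrative Hoeffding-style sample size: averaging n i.i.d. terms,
    each bounded in magnitude by kappa, approximates the full mean to
    within nu with probability at least 1 - t once
    n >= (2 * kappa**2 / nu**2) * ln(2 / t).
    The result is capped at the full sample size N, so the subsampled
    estimate never costs more than the exact evaluation."""
    n = math.ceil((2.0 * kappa ** 2 / nu ** 2) * math.log(2.0 / t))
    return min(n, N)
```

Tighter accuracies ν_{ℓ,k} or smaller failure probabilities t grow the batch, which is why the adaptive mechanism relaxes these requirements when it can.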
“…The nonnegative constants {κ_{f,ℓ}}_{ℓ=0}^{2} in (17)-(19) should be such that (see, e.g., [6]), for x ∈ R^n and all ℓ ∈ {0, 1, 2}, max_{i∈{1,...,N}} ‖∇^ℓ f_i(x)‖ ≤ κ_{f,ℓ}(x). Since their estimations can be challenging, we consider here a constant κ := κ_{f,ℓ}, for all ℓ ∈ {0, 1, 2}, setting its value experimentally, in order to control the growth of the sample sizes (23)-(25) throughout the running of the algorithm.…”
Section: A. Implementation Issues and Results
confidence: 99%
“…In a DFO context, this framework is the basis of [19], and a similar approach was considered in [15] in the context of analyzing protein structures. This framework has also been recently extended in a derivative-based context to higher-order regularization methods [7,26]. We also note that there has been some work on multilevel and multi-fidelity models (in both a DFO and derivative-based context), where an expensive objective can be approximated by surrogates which are cheaper to evaluate [11,34].…”
Section: Derivative-free Optimization
confidence: 96%
“…where b_0(x) and b_1(x) are realizations of random "batches", that is, randomly selected subsets of {1, . . . , N}…”
Section: A Subsampling Example
confidence: 99%
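The batch-subsampling idea in this last quote can be sketched for a finite-sum objective f(x) = (1/N) Σ f_i(x): the function value and gradient are estimated by averaging over independently drawn random batches b_0 and b_1. The per-term callable lists `fs` and `grads` are an assumption of this sketch, not the paper's interface:

```python
import random

def batch_estimates(fs, grads, x, b0_size, b1_size, rng=None):
    """Illustrative subsampling sketch: fs and grads are lists of N
    per-term callables (f_i and its gradient, returned as a list of
    components). Two independent random batches b0 and b1, drawn
    without replacement, yield inexact estimates of the average
    function value and average gradient at x."""
    rng = rng or random.Random()
    b0 = rng.sample(range(len(fs)), b0_size)       # batch for the function value
    b1 = rng.sample(range(len(grads)), b1_size)    # batch for the gradient
    f_est = sum(fs[i](x) for i in b0) / b0_size
    dim = len(grads[0](x))
    g_est = [sum(grads[i](x)[j] for i in b1) / b1_size for j in range(dim)]
    return f_est, g_est
```

Drawing b_0 and b_1 independently is what lets the accuracy requirements on the function value and on the derivative be controlled separately, as in the adaptively chosen sample sizes quoted above.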