2019
DOI: 10.1134/s0965542519070078

Fast Gradient Descent for Convex Minimization Problems with an Oracle Producing a (δ, L)-Model of Function at the Requested Point

Cited by 14 publications (11 citation statements) · References 14 publications
“…It is shown in [31] that, in order to construct the classical Frank-Wolfe method, instead of the auxiliary problem φ_{k+1}(x) = α_{k+1} ψ_{δ_k}(x, y_{k+1}) + V[u_k](x) in Algorithm 2 with m = 0 and μ = 0 (see also Section 3 of [31]), we can take the auxiliary problem φ_{k+1}(x) = α_{k+1} ψ_{δ_k}(x, y_{k+1}). Let us look at this substitution from the viewpoint of the δ_k-precision from Definition 2.11.…”
Section: Universal Conditional Gradient (Frank-Wolfe) Methods
confidence: 99%
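
The substitution described above is easiest to see in code. Below is a minimal sketch, not the paper's Algorithm 2: a classical Frank-Wolfe loop in which each step minimizes only the linear model ψ(x, y) = f(y) + ⟨∇f(y), x − y⟩ over the feasible set, i.e. the auxiliary problem without the Bregman term V[u_k](x). The least-squares objective, the probability-simplex domain, and the 2/(k+2) step size are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's Algorithm 2): classical Frank-Wolfe on the
# probability simplex. Each step minimizes only the linear model
# psi(x, y) = f(y) + <grad f(y), x - y> over the feasible set, i.e. the
# auxiliary problem WITHOUT the Bregman term V[u_k](x).

def frank_wolfe(grad_f, x0, n_steps=200):
    x = x0.copy()
    for k in range(n_steps):
        g = grad_f(x)
        # Linear minimization oracle over the simplex: the minimizer of
        # <g, s> subject to s >= 0, sum(s) = 1 is a coordinate vertex.
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (k + 2.0)  # classical Frank-Wolfe step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Example: minimize f(x) = 0.5 * ||A x - b||^2 over the simplex.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x_star = frank_wolfe(lambda x: A.T @ (A @ x - b), np.ones(5) / 5)
print(x_star)
```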
“…Note that in Definition 2.1 we allow L to depend on δ. Definition 2.1 is a generalization of the (δ, L)-model from [29, 31, 65], where μ = 0 and m = 0. Further, we denote the (δ, L, 0, 0, V)-model as a (δ, L)-model.…”
Section: Inexact Model in Minimization Problems: Definitions and Examples
confidence: 99%
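
For concreteness, here is a small numerical check of the (δ, L)-model idea, under the assumed two-sided inequality 0 ≤ f(x) − f(y) − ψ_δ(x, y) ≤ (L/2)‖x − y‖² + δ: for an L-smooth convex quadratic, the plain linear model is an exact (0, L)-model. The quadratic f and the sampled points below are illustrative choices, not taken from the cited papers.

```python
import numpy as np

# Numeric illustration of the assumed (delta, L)-model inequality: for an
# L-smooth convex f, the linear model psi(x, y) = <grad f(y), x - y> satisfies
#     0 <= f(x) - f(y) - psi(x, y) <= (L / 2) * ||x - y||^2 + delta
# with delta = 0; an inexact gradient would make delta positive.

rng = np.random.default_rng(1)
Q = rng.standard_normal((5, 5))
Q = Q.T @ Q                          # positive semidefinite Hessian
L = np.linalg.eigvalsh(Q).max()      # smoothness constant of f

f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

for _ in range(1000):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    gap = f(x) - f(y) - grad(y) @ (x - y)   # f(x) - f(y) - psi(x, y)
    assert -1e-9 <= gap <= 0.5 * L * np.dot(x - y, x - y) + 1e-9
print("sandwich inequality holds on all sampled pairs")
```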
“…The weight ratios of the multi-objective weighting function used in this paper are assigned according to formula (23).…”
Section: Control Strategy and Optimization of HESS
confidence: 99%
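
Formula (23) itself is not reproduced in this excerpt, so the following is only a generic weighted-sum scalarization sketch; the objective values and the 0.7/0.3 ratio are hypothetical.

```python
# Generic weighted-sum sketch only -- formula (23) is not reproduced in the
# excerpt, so the objectives and weight ratio below are illustrative.
def weighted_cost(objectives, weights):
    """Scalarize multiple objectives with fixed weight ratios (sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-12
    return sum(w * f for w, f in zip(weights, objectives))

# Example: trade off two normalized objectives with a 0.7 / 0.3 ratio.
print(weighted_cost([0.4, 0.9], [0.7, 0.3]))
```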
“…The traditional method for global optimization is a numerical optimization method based on gradient information, as discussed in references [23-26]. The idea of the gradient descent method is to use the negative gradient direction at the current position as the search direction. The closer the gradient descent method gets to the target value, the smaller the step size becomes and the slower the advance.…”
Section: Control Strategy and Optimization of HESS
confidence: 99%
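
The idea described in this statement, stepping along the negative gradient with progress slowing as the gradient vanishes near the target, can be sketched as follows; the quadratic test function and the fixed learning rate are illustrative assumptions, not taken from references [23-26].

```python
import numpy as np

# Minimal sketch of the gradient descent idea quoted above: move along the
# negative gradient; even with a fixed rate, the displacement ||lr * grad||
# shrinks automatically as the gradient vanishes near the minimizer.

def gradient_descent(grad_f, x0, lr=0.1, n_steps=50):
    x = x0.copy()
    for _ in range(n_steps):
        g = grad_f(x)
        x -= lr * g  # step in the negative gradient direction
    return x

# Example: minimize f(x) = ||x - c||^2, whose gradient is 2 * (x - c).
c = np.array([1.0, -2.0])
x_min = gradient_descent(lambda x: 2.0 * (x - c), np.zeros(2))
print(x_min)  # approaches c
```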