2000
DOI: 10.1016/s0377-0427(99)00276-9

A class of gradient unconstrained minimization algorithms with adaptive stepsize

Abstract: In this paper the development, convergence theory and numerical testing of a class of gradient unconstrained minimization algorithms with adaptive stepsize are presented. The proposed class comprises four algorithms: the first two incorporate techniques for the adaptation of a common stepsize for all coordinate directions and the other two allow an individual adaptive stepsize along each coordinate direction. All the algorithms are computationally efficient and possess interesting convergence properties utilizing…
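The abstract distinguishes a single adaptive stepsize shared by all coordinate directions from an individual adaptive stepsize along each coordinate direction. A minimal sketch of the second idea follows; the sign-based adaptation rule is a generic delta-bar-delta-style heuristic chosen for illustration, not the paper's actual scheme, and all names and constants are hypothetical:

```python
import numpy as np

def per_coordinate_descent(grad, x0, eta0=0.1, up=1.2, down=0.5, iters=100):
    """Gradient descent with an individual adaptive stepsize per
    coordinate direction: grow a coordinate's stepsize while its
    partial derivative keeps its sign, shrink it on a sign flip.
    This delta-bar-delta-style rule is illustrative only."""
    x = np.asarray(x0, dtype=float)
    eta = np.full_like(x, eta0)        # one stepsize per coordinate
    g_prev = np.zeros_like(x)
    for _ in range(iters):
        g = grad(x)
        eta = np.where(g * g_prev > 0, eta * up, eta)    # same sign: grow
        eta = np.where(g * g_prev < 0, eta * down, eta)  # sign flip: shrink
        x -= eta * g                   # coordinate-wise update
        g_prev = g
    return x

# Usage on the anisotropic quadratic f(x) = x0^2 + 10*x1^2.
x_star = per_coordinate_descent(lambda x: np.array([2*x[0], 20*x[1]]),
                                [3.0, -2.0])
```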


Year Published: 2001–2022


Cited by 116 publications (56 citation statements)
References 40 publications
“…Next we examine approaches to dynamically adapt the rate of learning that are based on optimization methods [Magoulas et al., 1997a; Magoulas et al., 1997b; Vrahatis et al., 2000a; Anastasiadis et al., 2005a; Anastasiadis et al., 2005b]. In the context of unconstrained optimisation, Armijo's modified SD algorithm automatically adapts the rate of convergence [Armijo, 1966].…”
Section: Adaptive Learning Rate Algorithms in an Optimization Context
Citation type: mentioning
confidence: 99%
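The Armijo rule referenced in this statement backtracks from a trial stepsize until a sufficient-decrease condition holds. A minimal sketch of steepest descent with Armijo backtracking; the constants s, beta and sigma are conventional illustrative choices, not values taken from [Armijo, 1966]:

```python
import numpy as np

def armijo_steepest_descent(f, grad, x0, s=1.0, beta=0.5, sigma=1e-4,
                            tol=1e-8, max_iter=500):
    """Steepest descent with Armijo backtracking: from a trial
    stepsize s, shrink by beta until the sufficient-decrease test
    f(x - t*g) <= f(x) - sigma*t*||g||^2 is satisfied."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t, fx = s, f(x)
        while f(x - t * g) > fx - sigma * t * g.dot(g) and t > 1e-12:
            t *= beta                  # backtrack
        x = x - t * g
    return x
```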
“…where α_k, which is determined by a line search, is the steplength, and d_k, which distinguishes the different line search methods [35,36,37,39,40,43,44,45,46,48,50], is a search direction of f at x_k. One of the most effective methods for the unconstrained optimization problem (1.1) is Newton's method.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
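The quoted iteration is x_{k+1} = x_k + α_k d_k, and Newton's method takes d_k = -H(x_k)^{-1} ∇f(x_k). A minimal sketch follows; the step-halving safeguard on α_k is an added assumption, not part of the quoted statement:

```python
import numpy as np

def damped_newton(f, grad, hess, x0, tol=1e-10, max_iter=50):
    """Iterate x_{k+1} = x_k + alpha_k * d_k with the Newton
    direction d_k = -H(x_k)^{-1} grad(x_k); alpha_k is damped by
    step halving whenever the full step fails to decrease f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)   # Newton direction
        alpha = 1.0
        while f(x + alpha * d) > f(x) and alpha > 1e-12:
            alpha *= 0.5                   # halve the step (safeguard)
        x = x + alpha * d
    return x
```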
“…Several techniques for choosing the steplength have appeared in the literature (e.g. [15,16,19,20,21]). The main motivation is to accelerate the convergence rate of the related descent methods.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
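One widely known steplength-choosing technique of this kind is the Barzilai–Borwein rule, which sets the stepsize from the last two iterates; whether it is among the techniques cited as [15,16,19,20,21] cannot be determined from the excerpt, so the sketch below is purely illustrative:

```python
import numpy as np

def barzilai_borwein(grad, x0, alpha0=1e-3, iters=200):
    """Gradient descent with the Barzilai-Borwein steplength
    alpha_k = (s^T s)/(s^T y), s = x_k - x_{k-1}, y = g_k - g_{k-1}:
    a scalar secant approximation of the inverse Hessian."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    x_new = x - alpha0 * g             # bootstrap with a plain step
    for _ in range(iters):
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s.dot(y)
        alpha = s.dot(s) / sy if sy > 0 else alpha0
        x, g = x_new, g_new
        x_new = x - alpha * g
    return x_new
```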