2020
DOI: 10.12732/ijam.v33i5.8
New Two-Step Conjugate Gradient Method for Unconstrained Optimization


Cited by 4 publications (5 citation statements)
References 30 publications
“…The comparative tests involve eighteen well-known test functions (see Table 1) obtained from [14], [15], [16], [17]. The comparative performances of the algorithms are assessed by taking into account both the total number of iterations and the number of function evaluations, in addition to computational timings.…”
Section: Numerical Results and Conclusion
confidence: 99%
“…is deemed acceptable provided it satisfies the Wolfe conditions (see [4], [7], [16], [18], [19], and [24])…”
Section: Numerical Results and Conclusion
confidence: 99%
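The Wolfe conditions referenced in the citation above are the standard acceptance test for a step length in line-search methods. A minimal sketch of the check is below; the function name, the toy objective, and the default constants c1 = 1e-4, c2 = 0.9 are our own illustrative choices, not taken from the cited papers.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the (weak) Wolfe conditions for a step of length alpha
    taken from x along a descent direction d, with 0 < c1 < c2 < 1."""
    gx_d = dot(grad(x), d)  # directional derivative at x; negative for descent
    x_new = [xi + alpha * di for xi, di in zip(x, d)]
    sufficient_decrease = f(x_new) <= f(x) + c1 * alpha * gx_d  # Armijo condition
    curvature = dot(grad(x_new), d) >= c2 * gx_d                # curvature condition
    return sufficient_decrease and curvature

# Toy example: f(x) = x1^2 + x2^2, steepest-descent direction from (1, 1).
f = lambda x: x[0] ** 2 + x[1] ** 2
grad = lambda x: [2 * x[0], 2 * x[1]]
x0, d = [1.0, 1.0], [-2.0, -2.0]
print(satisfies_wolfe(f, grad, x0, d, alpha=0.25))  # True
```

The exact-minimizer step alpha = 0.25 passes both conditions, while an overlong step such as alpha = 1.0 fails the sufficient-decrease test.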
“…, respectively. Numerically, the most successful quasi-Newton method is the BFGS method (Broyden, 1970). When the objective function f is convex, global convergence of the BFGS method has been established by several authors (see Byrd, Schnabel and Shultz, 1988; Dai, 2002; Moghrabi, 2019; Tajadod, Abedini, Rategari, & Mobin, 2016; Wolfe, 1971). Dai (2002) illustrated that the standard BFGS method may fail on non-convex functions with inexact line searches.…”
Section: ( ) ∈
confidence: 97%
“…If σ_i = H_i and ε = 1, the scalar β_i disappears and d_i reduces to a memoryless multi-step-method search direction that satisfies the relation in (4). Moghrabi (2019) proceeds with the choice σ_i = H_i. To complete the implementation details of the algorithm, the quantity d H g…”
Section: ( ) ∈
confidence: 99%
“…Because of the accumulation of round-off errors, orthogonality only really exists for a few adjacent vectors and convergence difficulties are present for large ill-conditioned systems. Many restart [5] and preconditioning techniques [6][7][8][9][10] improve convergence.…”
Section: Non-recursive CG-like Algorithm Without the Need to Restart
confidence: 99%
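The restart technique mentioned in the citation above periodically discards the accumulated conjugate direction and falls back to the current residual (steepest descent), which counteracts the loss of orthogonality from round-off. A minimal sketch for the linear CG case follows; the function name, the restart period, and the small test system are our own illustrative assumptions, not taken from the cited works.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg_with_restart(A, b, x0, restart=10, tol=1e-12, max_iter=200):
    """Linear conjugate gradient for a symmetric positive-definite A,
    resetting the search direction to the residual every `restart`
    iterations. Illustrative sketch, not the cited algorithm."""
    x = list(x0)
    r = [bi - dot(row, x) for bi, row in zip(b, A)]  # residual b - A x
    d = list(r)
    rr = dot(r, r)
    for k in range(1, max_iter + 1):
        if rr < tol:
            break
        Ad = [dot(row, d) for row in A]
        alpha = rr / dot(d, Ad)
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        rr_new = dot(r, r)
        if k % restart == 0:
            d = list(r)             # restart: discard accumulated direction
        else:
            beta = rr_new / rr      # Fletcher-Reeves beta for linear CG
            d = [ri + beta * di for ri, di in zip(r, d)]
        rr = rr_new
    return x

# Solve a small SPD system A x = b; exact solution is (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg_with_restart(A, b, [0.0, 0.0])
print(x)  # approximately [0.0909..., 0.6363...]
```

For this 2x2 system CG converges in two iterations, so the restart never triggers; on large ill-conditioned systems the periodic reset is what keeps the method progressing.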