2021
DOI: 10.29020/nybg.ejpam.v14i4.4128

A Descent Four-Term of Liu and Storey Conjugate Gradient Method for Large Scale Unconstrained Optimization Problems

Abstract: The conjugate gradient (CG) method is a useful tool for finding the optimum point of unconstrained optimization problems, since it requires neither second derivatives nor their approximations. Moreover, the CG method can be applied in many fields, such as machine learning, deep learning, and neural networks. This paper constructs a four-term conjugate gradient method that satisfies the descent property and the convergence properties needed to obtain a stationary point. The new modification wa…
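
For readers who want a concrete picture of the iteration being modified, here is a minimal Python sketch of the classical Liu and Storey (LS) conjugate gradient method paired with an Armijo backtracking line search. It is not the four-term modification proposed in the paper; the line-search parameters, the restart safeguard, and the Rosenbrock test problem are illustrative assumptions.

import numpy as np

def ls_conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=1000):
    # Sketch of the classical Liu-Storey (LS) nonlinear CG method; NOT the
    # paper's four-term method. Line-search constants are illustrative.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                 # start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:       # small gradient => stationary point
            break
        # Armijo backtracking: shrink alpha until sufficient decrease holds
        alpha, c1, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c1 * alpha * g.dot(d) and alpha > 1e-12:
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Liu-Storey coefficient: beta = g_{k+1}^T (g_{k+1} - g_k) / (-d_k^T g_k)
        beta = g_new.dot(g_new - g) / (-d.dot(g))
        d = -g_new + beta * d
        if d.dot(g_new) >= 0:              # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# Example usage on the Rosenbrock function (minimum at (1, 1)):
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(ls_conjugate_gradient(f, grad, np.array([-1.2, 1.0])))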

Cited by 4 publications (3 citation statements)
References 17 publications
“…In addition to their original authors, the issue of global convergence of methods (5) has also been investigated by researchers such as Al-Baali [40] and Gilbert and Nocedal [41]. Likewise, for all the CG directions presented in the previous paragraph, the authors proved global convergence under appropriate line search techniques such as Armijo [14, 16, 20, 29], weak Wolfe-Powell [15-18, 21, 23, 24, 26, 27, 30, 35], strong Wolfe-Powell [12, 16, 19, 22, 25, 28, 31], modifications of these three techniques [13, 32-34], or backtracking algorithms [36-39].…”
Section: Introduction (mentioning)
confidence: 93%
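
For reference, the line search conditions named in this excerpt are standard; with step size \alpha_k, search direction d_k, and constants 0 < c_1 < c_2 < 1, they can be written as follows (added for context, not quoted from the cited works):

f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k \nabla f(x_k)^{T} d_k   % Armijo (sufficient decrease)
\nabla f(x_k + \alpha_k d_k)^{T} d_k \ge c_2 \nabla f(x_k)^{T} d_k      % curvature condition added for weak Wolfe-Powell
|\nabla f(x_k + \alpha_k d_k)^{T} d_k| \le -c_2 \nabla f(x_k)^{T} d_k   % tightened curvature condition for strong Wolfe-Powell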
“…Over the years, many researchers developed methods (5) and increased their efficiency from both theoretical and numerical viewpoints. For example, interested readers can see some modifications of the HS method in [12, 13], several combinations of the FR method in [14-16], various developments of the PRP method in [17-21], an extended LS method in [22], and variant improvements of the DY method in [23-25]. Furthermore, some researchers used techniques like quasi-Newton [26-28], regularization [29, 30], a combination of the above methods [31-33], or alternative techniques [34, 35] and introduced appropriate CG methods to solve optimization problems.…”
Section: Introduction (mentioning)
confidence: 99%
“…Over the years, many researchers developed method (5) and increased its efficiency from both theoretical and numerical viewpoints. For example, interested readers can see some modifications of the HS method in the studies by Faramarzi and Amini [11] and Hu et al. [12], several combinations of the FR method in the work by Abubakar et al. [13] and Sakai and Iiduka [14], various developments of the PRP method in the studies by Mishra et al. [15], Wu [16], and Andrei [17], an extended LS method in [18], and variant improvements of the DY method in the studies by Deepho et al. [19], Zhu et al. [20], and Jiang and Jian [21]. Furthermore, some researchers used techniques such as quasi-Newton [22, 23], regularization [24-26], a combination of the above methods [27, 28], or alternative techniques [29, 30] and introduced appropriate CG methods to solve optimization problems.…”
Section: Introduction (mentioning)
confidence: 99%
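
For reference, the classical CG update parameters named in these excerpts (HS, FR, PRP, LS, DY) are standard; with g_k = \nabla f(x_k) and y_{k-1} = g_k - g_{k-1}, they read as follows (added for context, not quoted from the citing papers):

\beta_k^{HS} = \frac{g_k^{T} y_{k-1}}{d_{k-1}^{T} y_{k-1}}, \qquad
\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad
\beta_k^{PRP} = \frac{g_k^{T} y_{k-1}}{\|g_{k-1}\|^2},
\beta_k^{LS} = \frac{g_k^{T} y_{k-1}}{-d_{k-1}^{T} g_{k-1}}, \qquad
\beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^{T} y_{k-1}}.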