2020
DOI: 10.19139/soic-2310-5070-480
A New Family of Hybrid Conjugate Gradient Methods for Unconstrained Optimization

Abstract: The conjugate gradient method is a very efficient iterative technique for solving large-scale unconstrained optimization problems. Motivated by recent modifications of some variants of the method and the construction of hybrid methods, this study proposes four hybrid methods that are globally convergent as well as computationally efficient. The approach adopted for constructing the hybrid methods entails projecting ten recently modified conjugate gradient methods. Each of the hybrid methods is shown to satisfy the …
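The paper's four hybrid formulas are not reproduced in the abstract above, so the sketch below only illustrates the general setting: the standard nonlinear CG iteration x_{k+1} = x_k + alpha_k d_k with d_{k+1} = -g_{k+1} + beta_k d_k, using a classical truncated HS/DY hybrid for beta as an illustrative stand-in. All function names and constants here are assumptions, not taken from the paper.

```python
import numpy as np

def hybrid_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Minimal nonlinear CG skeleton with a hybrid beta.

    Illustration only: beta = max(0, min(beta_HS, beta_DY)) is a
    classical HS/DY hybrid, NOT one of the paper's four hybrids.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # simple bounded Armijo backtracking (sufficient decrease)
        alpha, fx, slope = 1.0, f(x), g @ d
        for _ in range(50):
            if f(x + alpha * d) <= fx + 1e-4 * alpha * slope:
                break
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                       # gradient change
        denom = d @ y
        if denom == 0.0:                    # guard: fall back to restart
            beta = 0.0
        else:
            beta_hs = (g_new @ y) / denom       # Hestenes-Stiefel
            beta_dy = (g_new @ g_new) / denom   # Dai-Yuan
            beta = max(0.0, min(beta_hs, beta_dy))
        d = -g_new + beta * d               # new conjugate direction
        x, g = x_new, g_new
    return x
```

On the convex quadratic f(x) = ||x||^2, for instance, `hybrid_cg(lambda x: x @ x, lambda x: 2 * x, np.ones(5))` reaches the minimizer at the origin in a single step.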

Cited by 3 publications (4 citation statements)
References 28 publications
“…In addition to their original authors, the issue of global convergence of methods (5) has also been investigated by researchers such as Al-Baali [40] and Gilbert and Nocedal [41]. Likewise, for all the CG directions presented in the previous paragraph, the authors proved global convergence under suitable line search techniques such as Armijo [14, 16, 20, 29], weak Wolfe-Powell [15-18, 21, 23, 24, 26, 27, 30, 35], strong Wolfe-Powell [12, 16, 19, 22, 25, 28, 31], modifications of these three techniques [13, 32-34], or backtracking algorithms [36-39].…”
Section: Introduction
confidence: 93%
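The line search conditions named in the statement above are standard. As a minimal sketch (the function name and parameter values are illustrative, not taken from any of the cited works), Armijo backtracking looks like the following, with the weak and strong Wolfe-Powell curvature conditions noted in the docstring:

```python
import numpy as np

def armijo_backtracking(f, grad, x, d, c1=1e-4, rho=0.5, max_steps=50):
    """Backtracking search for a step size satisfying the Armijo condition
    f(x + a*d) <= f(x) + c1 * a * grad(x).d  (sufficient decrease).

    The weak Wolfe-Powell rule adds the curvature condition
    grad(x + a*d).d >= c2 * grad(x).d for some c2 in (c1, 1);
    the strong version bounds |grad(x + a*d).d| instead.
    """
    alpha, fx, slope = 1.0, f(x), grad(x) @ d
    for _ in range(max_steps):
        if f(x + alpha * d) <= fx + c1 * alpha * slope:
            return alpha
        alpha *= rho                        # shrink the step geometrically
    return alpha

# example: step along the steepest descent direction of f(x) = x.x
x0 = np.array([3.0, -4.0])
step = armijo_backtracking(lambda x: x @ x, lambda x: 2 * x, x0, -2 * x0)
```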
“…For example, interested readers can see some modifications of the HS method in [12, 13], several combinations of the FR method in [14-16], various developments of the PRP method in [17-21], an extended LS method in [22], and various improvements of the DY method in [23-25]. Furthermore, some researchers used techniques such as quasi-Newton [26-28], regularization [29, 30], combinations of the above methods [31-33], or alternative techniques [34, 35] to introduce appropriate CG methods for solving optimization problems. Similarly, there are plenty of CG algorithms created to solve systems of nonlinear equations [36-39].…”
Section: Introduction
confidence: 99%
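For reference, the classical update parameters named in this statement (HS, FR, PRP, LS, DY) have the following standard textbook forms (these are the unmodified definitions, not the modified versions developed in the cited references), with g_k the gradient at x_k, d_k the search direction, and y_k = g_{k+1} - g_k:

```latex
\beta_k^{HS}=\frac{g_{k+1}^{\top}y_k}{d_k^{\top}y_k},\quad
\beta_k^{FR}=\frac{\|g_{k+1}\|^{2}}{\|g_k\|^{2}},\quad
\beta_k^{PRP}=\frac{g_{k+1}^{\top}y_k}{\|g_k\|^{2}},\quad
\beta_k^{LS}=-\frac{g_{k+1}^{\top}y_k}{d_k^{\top}g_k},\quad
\beta_k^{DY}=\frac{\|g_{k+1}\|^{2}}{d_k^{\top}y_k}
```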
“…As such, there is a common need to develop an optimization algorithm that can handle the complexity of scientific and engineering problems. While there is no dearth of optimization methods in the literature [2, 3], each has its strengths and weaknesses, ranging from traditional optimization methods to more recent population-based, nature-inspired metaheuristic algorithms.…”
Section: Introduction
confidence: 99%
“…Conversely, the approximate methods do not guarantee optimal solutions but produce near-optimal ones in polynomial time. The performance of these methods is measured by how close the found solutions are to the global optimum [3]. The past decades have witnessed significant interest in approximate methods, which comprise traditional and heuristic or metaheuristic optimization methods.…”
Section: Introduction
confidence: 99%