In this study, we derive a new scale parameter φ for the CG method for solving large-scale unconstrained optimization problems. The new scale parameter φ satisfies the sufficient descent condition, and global convergence is proved under the strong Wolfe line search conditions. Our numerical results show that the proposed method is effective and robust compared with some known algorithms. In practice, the step length is computed by an inexact search, known as a line search [2]. Among them, the so-called strong Wolfe line search conditions require that the step length satisfy both a sufficient decrease and a curvature condition [3], [4].
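For reference, the strong Wolfe conditions in their common textbook form are as follows (the constants c_1, c_2 and this exact notation are the standard ones, not necessarily the paper's):

\[
f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k \, g_k^{T} d_k, \qquad
\bigl| g(x_k + \alpha_k d_k)^{T} d_k \bigr| \le c_2 \bigl| g_k^{T} d_k \bigr|,
\]

where \( g_k = \nabla f(x_k) \), \( d_k \) is the search direction, and \( 0 < c_1 < c_2 < 1 \). The sufficient descent condition mentioned above is usually stated as \( g_k^{T} d_k \le -c \|g_k\|^2 \) for some constant \( c > 0 \).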
Many researchers have been interested in developing and improving the conjugate gradient method for solving large-scale unconstrained optimization problems. In this work a new parameter is presented as a convex combination of the RMIL and MMWU parameters. The suggested method always produces a descent search direction at each iteration. Under the strong Wolfe-Powell (SWP) line search conditions, the global convergence of the proposed method is established. Preliminary numerical comparisons with some other CG methods show that this new method is efficient and robust in solving all given problems.
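A convex combination of two CG parameters generally takes the following form; this is a sketch of the construction only, since the excerpt gives neither the MMWU formula nor the rule for choosing the mixing weight \( \theta_k \):

\[
\beta_k = (1 - \theta_k)\,\beta_k^{\mathrm{RMIL}} + \theta_k\,\beta_k^{\mathrm{MMWU}}, \qquad \theta_k \in [0, 1],
\]

where, in its commonly cited form, \( \beta_k^{\mathrm{RMIL}} = g_k^{T}(g_k - g_{k-1}) / \|d_{k-1}\|^2 \). Because the search direction \( d_k = -g_k + \beta_k d_{k-1} \) is linear in \( \beta_k \), a descent property that holds for both constituent parameters is preserved for every \( \theta_k \in [0, 1] \).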
The quasi-Newton equation is the foundation of a wide range of quasi-Newton methods for optimization problems. In this paper, we derive a new quasi-Newton equation based on the second-order Taylor series expansion. Global convergence is established under suitable conditions, and numerical results are reported to show that the given algorithm is more effective than the standard BFGS method.
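For context, the classical quasi-Newton (secant) equation follows from a first-order Taylor expansion of the gradient; modified equations of the kind described here typically retain higher-order information (the paper's specific equation is not reproduced in this excerpt):

\[
g(x_k) \approx g(x_{k+1}) + \nabla^2 f(x_{k+1})\,(x_k - x_{k+1})
\quad \Longrightarrow \quad
B_{k+1} s_k = y_k,
\]

where \( s_k = x_{k+1} - x_k \), \( y_k = g_{k+1} - g_k \), and \( B_{k+1} \) is the updated approximation to the Hessian \( \nabla^2 f(x_{k+1}) \).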
Because of its simplicity, low memory requirement, low computational cost, and global convergence properties, the conjugate gradient (CG) method is among the most popular iterative techniques for both linear and nonlinear optimization. Some classical CG methods, however, have drawbacks such as poor global convergence and weak numerical performance in terms of iterations and function evaluations. To address these shortcomings, researchers have proposed new CG parameter variants with efficient numerical results and good convergence properties. In this paper we present a new conjugate gradient formula based on the memoryless self-scaling DFP quasi-Newton (QN) method. The proposed formula fulfills the sufficient descent property and the global convergence condition under the line searches considered. When the exact line search is used, the proposed formula reduces to the classical HS formula. Finally, we conclude that our proposed method is effective.
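To make the shared framework of these abstracts concrete, here is a minimal sketch of a nonlinear CG loop with the classical Hestenes-Stiefel (HS) parameter, which the formula above reduces to under exact line search, paired with SciPy's strong-Wolfe line search. The function name cg_hs and all parameter defaults are our own illustrative choices; this is not the paper's memoryless self-scaling DFP-based formula.

```python
# Sketch: nonlinear CG with the Hestenes-Stiefel (HS) beta and a
# strong-Wolfe line search. Illustrative only; not the paper's method.
import numpy as np
from scipy.optimize import line_search  # enforces the (strong) Wolfe conditions

def cg_hs(f, grad, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # c2 = 0.1 is the usual curvature constant for CG methods
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:                    # line search failed: restart
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                        # gradient difference y_k
        denom = d @ y
        # HS parameter; fall back to steepest descent if denominator vanishes
        beta = (g_new @ y) / denom if abs(denom) > 1e-12 else 0.0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage example: minimize the Rosenbrock function
from scipy.optimize import rosen, rosen_der
x_star = cg_hs(rosen, rosen_der, np.array([-1.2, 1.0]))
```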