2016
DOI: 10.17654/ms100111787
Inexact CG-Method via SR1 Update for Solving Systems of Nonlinear Equations

Cited by 8 publications (11 citation statements). References 0 publications.
“…The computational experiment is based on the number of iterations and CPU time. To ascertain the global convergence of the proposed method, the benchmark problems in [21,22] were used with two different initial starting points (ISP), and the performance of the methods was compared using the performance profile presented by Dolan and Moré [24]. The performance profile ρ_s : ℝ → [0,1] is defined as follows: let P and S be the set of problems and the set of solvers, respectively.…”
Section: Numerical Results
confidence: 99%
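The Dolan–Moré performance profile referenced in the excerpt can be sketched in a few lines of NumPy. This is a minimal illustrative implementation, not code from the paper; the function name, the cost matrix `T`, and the grid `taus` are assumptions for the example.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile.

    T: (n_problems, n_solvers) array of costs, e.g. iteration counts or
       CPU times; use np.inf where a solver failed on a problem.
    taus: iterable of performance-ratio thresholds (tau >= 1).
    Returns rho with rho[j, s] = fraction of problems that solver s
    solves within a factor taus[j] of the best solver on that problem.
    """
    T = np.asarray(T, dtype=float)
    n_p, n_s = T.shape
    best = T.min(axis=1, keepdims=True)   # best cost per problem
    r = T / best                          # performance ratios r_{p,s}
    rho = np.array([[np.mean(r[:, s] <= tau) for s in range(n_s)]
                    for tau in taus])
    return rho
```

For instance, with two solvers and three problems, `performance_profile([[1, 2], [2, 2], [4, 1]], [1.0, 2.0])` reports that each solver is best (ratio ≤ 1) on two of the three problems.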
“…PROBLEM 1 [3]: F_i(x) = ( )² + ( − − 1); i = 1, 2, 3, ⋯, n. x₀ = (0.9, 0.9, 0.9, ⋯, 0.9) and x₀ = (0.7, 0.7, 0.7, ⋯, 0.7)…”
Section: List of Benchmark Test Problems Used
confidence: 99%
“…F(x) = 0, x ∈ ℝⁿ; (Eq 1) where F : ℝⁿ → ℝⁿ is continuously differentiable. Newton and quasi-Newton methods are the most widely used methods to solve such problems because they have very attractive convergence properties and practical applications (see [1,2,3,4]). However, they are not usually suitable for large-scale nonlinear systems of equations because they require the Jacobian matrix, or an approximation to it, at every iteration while solving optimization problems.…”
Section: Introduction
confidence: 99%
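The cost the excerpt alludes to is easy to see in code: classical Newton's method must form and factor the Jacobian J(x) at every step. The sketch below is a generic textbook Newton iteration for F(x) = 0, not the paper's CG/SR1 method; the function names and tolerances are illustrative assumptions.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0 with F: R^n -> R^n.

    Each iteration evaluates the full Jacobian J(x) and solves an n x n
    linear system -- the expense that motivates Jacobian-free approaches
    for large-scale problems.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        # Solve J(x) d = -F(x) for the Newton step d, then update x.
        d = np.linalg.solve(J(x), -Fx)
        x = x + d
    return x
```

As a one-dimensional check, applying it to F(x) = x² − 2 with J(x) = 2x from x₀ = 1 converges to √2 in a handful of iterations.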
“…Furthermore, the search direction d_k is generally required to satisfy the descent condition ∇f(x_k)^T d_k < 0. The derivative-free direction can be obtained in several ways [4,5,7,9,12]. An iterative method that generates a sequence {x_k} satisfying (3) or (5) is called a norm descent method.…”
Section: Introduction
confidence: 99%
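The descent condition in the excerpt is a simple inner-product test, which can be checked numerically for any candidate direction. A minimal sketch (the helper name is an assumption; the steepest-descent direction −∇f(x) is used only as an example that always satisfies the condition when the gradient is nonzero):

```python
import numpy as np

def is_descent_direction(grad, d):
    """Check the descent condition grad(x_k)^T d_k < 0."""
    return float(np.dot(grad, d)) < 0.0

# The steepest-descent direction d = -grad trivially satisfies the
# condition, since grad^T (-grad) = -||grad||^2 < 0 for grad != 0.
g = np.array([3.0, -1.0])
print(is_descent_direction(g, -g))  # True
print(is_descent_direction(g, g))   # False: an ascent direction
```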