2012
DOI: 10.1080/02331934.2012.745529

Globally convergent algorithms for solving unconstrained optimization problems

Abstract: New algorithms for solving unconstrained optimization problems are presented based on the idea of combining two types of descent directions: the direction of the anti-gradient and either the Newton or quasi-Newton directions. The use of the latter directions allows one to improve the convergence rate. Global and superlinear convergence properties of these algorithms are established. Numerical experiments using some unconstrained test problems are reported. The proposed algorithms are also compared with some existing s…
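To make the combination concrete, here is a minimal sketch of one step of such a hybrid scheme. It is only an illustration of the general idea (the switching rule, the fixed step size, and all names are assumptions), not the algorithm proposed in the paper, which also covers quasi-Newton directions and establishes global and superlinear convergence.

```python
import numpy as np

def hybrid_descent_step(grad, hess, x, alpha=1.0):
    """One illustrative step mixing the anti-gradient and Newton directions.

    A generic sketch, not the paper's algorithm: it takes the Newton direction
    when it is a usable descent direction and falls back to the anti-gradient
    otherwise.  The fixed step size alpha is an assumption.
    """
    g = grad(x)
    try:
        d = np.linalg.solve(hess(x), -g)   # Newton direction: solve H d = -g
        if g @ d >= 0:                     # not a descent direction -> anti-gradient
            d = -g
    except np.linalg.LinAlgError:          # singular Hessian -> anti-gradient
        d = -g
    return x + alpha * d
```

In the paper it is this kind of combination, with a proper step-size rule, that yields global convergence while the Newton or quasi-Newton component preserves the fast local rate.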

Cited by 11 publications (7 citation statements). References 15 publications.
“…This type of approach analyses the relationship between each feature and the class for every instance in order to derive a conditional probability linking feature values to the class. At the training stage, the probability of each class is estimated by counting how frequently it occurs in the training database [25]. NB is fast, has a simple structure, is easy to implement, and is effective.…”
Section: NB Classifier (mentioning)
confidence: 99%
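The counting-based training the excerpt describes can be sketched as a tiny categorical Naive Bayes trainer. This is a generic illustration, not the classifier used in the citing paper; the function names and the add-one smoothing are assumptions.

```python
from collections import Counter, defaultdict

def train_naive_bayes(rows, labels):
    """Estimate class priors and per-feature conditionals by counting (generic sketch)."""
    class_counts = Counter(labels)
    value_counts = defaultdict(Counter)      # (feature index, class) -> value frequencies
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            value_counts[(i, y)][v] += 1

    n = len(labels)
    priors = {c: k / n for c, k in class_counts.items()}

    def cond_prob(i, v, c):
        """P(feature i takes value v | class c), with add-one smoothing (an assumption)."""
        counts = value_counts[(i, c)]
        return (counts[v] + 1) / (class_counts[c] + len(counts) + 1)

    return priors, cond_prob
```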
“…In calculations, we take η = 10⁻³, ϑ = 1.1, δ = 10⁻³, ω = 10⁻¹⁰, ϖ = 10¹⁰ for the CGN based on [40] and we set µ = 10³, ε = 0.1, ρ₀ = 2¹⁰ for the proposed algorithm.…”
Section: Numerical Experiments (mentioning)
confidence: 99%
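Purely to make the quoted settings easier to scan, they can be written out as plain values. The grouping below and the reading of ρ₀ as 2¹⁰ are assumptions reconstructed from the garbled superscripts in the excerpt.

```python
# Parameter values quoted in the excerpt above, written out explicitly.
# The reading of rho0 as 2**10 is an assumption (the source text is garbled).
cgn_params = {"eta": 1e-3, "vartheta": 1.1, "delta": 1e-3,
              "omega": 1e-10, "varpi": 1e10}               # settings for CGN, per [40]
proposed_params = {"mu": 1e3, "eps": 0.1, "rho0": 2**10}   # settings for the proposed algorithm
```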
“…In this case, we choose an optimal network from all possible combinations of removing some arcs in the existing cycles. In the second case, when we have a large number of cycles, we apply the global optimization algorithm AGOP introduced in [24,25] in conjunction with the recently developed local optimization algorithm CGN [39,40]. AGOP is an efficient algorithm for solving many difficult practical problems whose objective functions are discontinuous [19,26] or even piecewise constant [42].…”
Section: Introduction (mentioning)
confidence: 99%
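The two-stage pattern the excerpt relies on, a global search followed by local refinement of the best candidate, can be sketched generically. The sketch below is not AGOP or CGN; a random-sampling global phase and SciPy's BFGS solver are used as stand-ins purely to show the structure.

```python
import numpy as np
from scipy.optimize import minimize

def global_then_local(f, bounds, n_samples=200, seed=0):
    """Coarse global phase (random sampling) followed by a local refinement phase."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    candidates = rng.uniform(lo, hi, size=(n_samples, len(bounds)))
    x0 = min(candidates, key=f)              # best sample from the global phase
    return minimize(f, x0, method="BFGS")    # local refinement (stand-in for a CGN-type solver)
```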
“…Although Newton's method has a fast theoretical convergence rate [8], in practice it is hampered by the difficulty of computing second derivatives for large-scale problems and may fail to converge to a solution of problem (1) from weak initial points. Therefore, in practice many researchers modify it or combine it with another method for solving large-scale unconstrained optimization, to make it more computationally efficient and to obtain an accurate solution [9][10][11][12][13][14]. Bouaricha et al. in [9] presented a Newton method with a trust region for the large-scale unconstrained optimization problem (1), min_{x ∈ ℝⁿ} f(x), and used two iterative methods, incomplete Cholesky decomposition and the conjugate gradient method, to obtain the trust-region step instead of using the classical Newton direction.…”
Section: Introduction (mentioning)
confidence: 99%
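To illustrate why a conjugate gradient inner solver is attractive for obtaining such a step, here is a hedged sketch of a truncated-Newton direction computed by CG on H d = -g. It is a generic textbook construction, not the method of Bouaricha et al.; the negative-curvature handling and tolerances are assumptions.

```python
import numpy as np

def cg_newton_direction(H, g, tol=1e-6, max_iter=50):
    """Approximately solve H d = -g with conjugate gradients (truncated Newton step)."""
    d = np.zeros_like(g)
    r = -g - H @ d                 # residual of H d = -g, starting from d = 0
    p = r.copy()
    for _ in range(max_iter):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0:              # negative curvature: stop with the current direction
            return d if d.any() else -g
        a = (r @ r) / curv
        d = d + a * p
        r_new = r - a * Hp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d
```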
“…Grapsa [12] then proposed a modified Newton-direction method that applies a suitable modification to the gradient vector so as to obtain a descent property without a line-search technique for unconstrained optimization problems, and proved its rate of convergence based on a modified Newton-Kantorovich theorem due to Dennis. Taheri et al. [13] developed a new algorithm for solving unconstrained optimization problems by combining the anti-gradient direction with the Newton direction to improve the convergence rate of Newton's method. Abiodun and Adelabu [14] recently presented two iterative modifications of Newton's method that use an update formula, based on recurring matrix factorizations, to replace the inverse matrix while maintaining its positive definiteness.…”
Section: Introduction (mentioning)
confidence: 99%
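As an example of an update formula that maintains an inverse-Hessian approximation instead of forming and inverting the Hessian, the classical BFGS inverse update is sketched below. Whether this is the particular update used in [14] is not established by the excerpt; it is given only to illustrate the idea of a positive-definiteness-preserving update.

```python
import numpy as np

def bfgs_inverse_update(H_inv, s, y):
    """Classical BFGS update of an inverse-Hessian approximation.

    s = x_{k+1} - x_k and y = grad f(x_{k+1}) - grad f(x_k).
    If s @ y > 0, the update preserves positive definiteness of H_inv.
    """
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H_inv @ V.T + rho * np.outer(s, s)
```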