2017
DOI: 10.1007/978-981-10-4642-1_17

Newton Like Line Search Method Using q-Calculus

Cited by 8 publications (4 citation statements)
References 12 publications
“…A modified least mean square algorithm using q-calculus was also proposed; it automatically adapted the learning rate with respect to the error and was shown to converge quickly [36]. In optimization, the q-calculus has been employed in Newton, modified Newton, BFGS, and limited-memory BFGS methods for solving unconstrained nonlinear optimization problems [19,37–40], requiring the fewest iterations. Among conjugate gradient methods, the q-analogue of the Fletcher-Reeves method was developed [41] to optimize unimodal and multimodal functions, with Gaussian perturbations applied in some iterations to ensure global convergence, albeit only in a probabilistic sense.…”
Section: Introduction (mentioning)
confidence: 99%
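
The q-calculus methods surveyed in these excerpts all build on the Jackson q-derivative, D_q f(x) = (f(qx) - f(x)) / ((q - 1)x), which recovers the classical derivative as q tends to 1. Below is a minimal Python sketch of it; the function name, the default q = 0.9, and the central-difference fallback near x = 0 are illustrative choices, not taken from the cited papers.

    def q_derivative(f, x, q=0.9, eps=1e-8):
        """Jackson q-derivative: D_q f(x) = (f(q*x) - f(x)) / ((q - 1) * x).

        As q -> 1 (or at x = 0) the quotient degenerates, so we fall
        back to an ordinary central difference there.
        """
        if abs(x) < eps or abs(q - 1.0) < eps:
            return (f(x + eps) - f(x - eps)) / (2.0 * eps)
        return (f(q * x) - f(x)) / ((q - 1.0) * x)

    # For f(x) = x**2, D_q f(x) = (q + 1) * x, which tends to 2x as q -> 1.
    f = lambda x: x * x
    print(q_derivative(f, 3.0, q=0.9))    # (0.9 + 1) * 3 = 5.7
    print(q_derivative(f, 3.0, q=0.999))  # approaches the classical value 6
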
“…However, the convergence properties of the steepest descent method with inexact line searches have been studied under several strategies for choosing the step length α_k [36–38]. Recently, several modified algorithms using the q-gradient have been proposed for solving unconstrained optimization problems [10,19,39–41].…”
Section: Introduction (mentioning)
confidence: 99%
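
To make the role of the step length α_k concrete, here is a hedged sketch of descent along the negative q-gradient with an Armijo backtracking rule. The componentwise Jackson quotient and every parameter value (q, c, rho, the tolerances) are assumptions for illustration only, not the exact schemes of [36–41].

    import numpy as np

    def q_gradient(f, x, q=0.95, eps=1e-8):
        """Componentwise Jackson quotient: entry i is
        (f(x with x_i scaled by q) - f(x)) / ((q - 1) * x_i),
        with a forward difference where x_i is ~0."""
        g = np.zeros_like(x, dtype=float)
        fx = f(x)
        for i in range(x.size):
            xq = x.copy()
            if abs(x[i]) < eps:
                xq[i] = x[i] + eps
                g[i] = (f(xq) - fx) / eps
            else:
                xq[i] = q * x[i]
                g[i] = (f(xq) - fx) / ((q - 1.0) * x[i])
        return g

    def q_descent(f, x0, q=0.95, c=1e-4, rho=0.5, tol=1e-6, max_iter=1000):
        """Steepest descent along the negative q-gradient; alpha_k is
        shrunk by backtracking until the Armijo condition
        f(x - a*g) <= f(x) - c * a * ||g||^2 holds (g is assumed to
        define a descent direction, as it does for convex examples)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = q_gradient(f, x, q)
            if np.linalg.norm(g) < tol:
                break
            alpha = 1.0
            while f(x - alpha * g) > f(x) - c * alpha * g.dot(g):
                alpha *= rho
            x = x - alpha * g
        return x

    # Example on a simple convex quadratic; the minimizer is the origin.
    quad = lambda x: x[0]**2 + 10.0 * x[1]**2
    print(q_descent(quad, [4.0, -2.0]))   # approximately [0, 0]
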
“…Further, the global optimum was sought using the q-steepest descent and q-conjugate gradient methods, in which a descent scheme combining q-calculus with a stochastic approach is presented without addressing the scheme's order of convergence [33]. The q-calculus was applied in Newton's method to solve unconstrained single-objective optimization [34]. This idea was then extended to solve (UMOP) within the context of the q-calculus [35].…”
Section: Introduction (mentioning)
confidence: 99%
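
As one concrete picture of a Newton-like iteration in this spirit, the sketch below drives f'(x) to zero while replacing the curvature term f''(x) by the Jackson q-derivative of f', letting q_k -> 1 across iterations so the scheme approaches classical Newton. The q_k schedule, tolerances, and test function are illustrative assumptions, not the specific method of [34] or the title paper.

    import math

    def q_derivative(f, x, q, eps=1e-8):
        """Jackson q-derivative, falling back to a central difference
        when q is ~1 or x is ~0."""
        if abs(x) < eps or abs(q - 1.0) < eps:
            return (f(x + eps) - f(x - eps)) / (2.0 * eps)
        return (f(q * x) - f(x)) / ((q - 1.0) * x)

    def q_newton_min(f_prime, x0, tol=1e-10, max_iter=50):
        """Newton-like minimization: solve f'(x) = 0 with the curvature
        term replaced by the q-derivative of f', letting q_k -> 1."""
        x = float(x0)
        for k in range(max_iter):
            qk = 1.0 - 0.5 ** (k + 2)          # q_k -> 1 as k grows
            h = q_derivative(f_prime, x, qk)   # q-analogue of f''(x)
            if abs(h) < 1e-12:                 # guard against flat curvature
                break
            step = f_prime(x) / h
            x -= step
            if abs(step) < tol:
                break
        return x

    # Example: f(x) = exp(x) - 2x has f'(x) = exp(x) - 2, minimized at x = ln 2.
    x_star = q_newton_min(lambda x: math.exp(x) - 2.0, x0=1.0)
    print(x_star, math.log(2.0))   # both approximately 0.6931
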