Two families of derivative-free methods without memory for approximating a simple zero of a nonlinear equation are presented. The proposed schemes have an accelerator parameter that can increase the convergence rate without any new functional evaluations. In this way, we construct a method with memory that considerably increases the efficiency index from 8^{1/4} ≈ 1.681 to 12^{1/4} ≈ 1.861. Numerical examples and comparisons with existing methods are included to confirm the theoretical results and the high computational efficiency.
A class of derivative-free methods without memory for approximating a simple zero of a nonlinear equation is presented. The proposed class uses four function evaluations per iteration and attains convergence order eight; it is therefore an optimal three-step scheme without memory in the sense of the Kung–Traub conjecture. Moreover, the proposed class has an accelerator parameter that can raise the convergence rate from eight to twelve without any new functional evaluations. Thus, we construct a method with memory that considerably increases the efficiency index from 8^{1/4} ≈ 1.681 to 12^{1/4} ≈ 1.861. Illustrations are also included to support the underlying theory.
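The accelerator-parameter idea can be illustrated with a minimal sketch. This is not the authors' eighth-order three-step scheme but the classic one-step analogue (a Steffensen-type method with memory, going back to Traub): the parameter gamma is updated from function values already computed, which raises the convergence order from 2 to 1 + sqrt(2) ≈ 2.414 at no extra function evaluations. The function names and starting values below are illustrative choices, not from the paper.

```python
def divided_diff(f, a, b):
    """First-order divided difference f[a, b], a derivative-free slope."""
    return (f(a) - f(b)) / (a - b)

def steffensen_with_memory(f, x0, gamma0=0.01, tol=1e-12, max_iter=50):
    """One-step derivative-free method with a self-accelerating parameter.

    Sketch of the with-memory principle only; the paper's scheme is a
    three-step, eighth-order method accelerated to order twelve.
    """
    x, gamma = x0, gamma0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx          # auxiliary point, no derivatives used
        dd = divided_diff(f, w, x)  # secant slope f[w, x] approximating f'(x)
        x = x - fx / dd             # Steffensen-type step
        gamma = -1.0 / dd           # accelerator: gamma -> -1/f'(alpha), reused next step
    return x

# Example: simple zero of f(x) = x**3 - 2 near x = 1.3
root = steffensen_with_memory(lambda x: x**3 - 2, 1.3)

# Efficiency index E = p**(1/n), p = order, n = evaluations per step:
# four evaluations at order 8 give 8**(1/4), at order 12 give 12**(1/4).
e_without_memory = 8 ** 0.25    # about 1.682
e_with_memory = 12 ** 0.25      # about 1.861
```

The last two lines reproduce the efficiency-index comparison quoted in the abstract: the acceleration costs nothing in evaluations, so the gain comes entirely from the higher order.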