In this paper, we propose a new method for the numerical solution of fuzzy nonlinear equations in parametric form using a new conjugate gradient technique. A table of numerical results is given to show the efficiency of the proposed method, which is compared with classical algorithms, namely the Fletcher–Reeves (FR), Polak–Ribière (PRP), and conjugate descent (CD, Fletcher) techniques.
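For context, the sketch below shows a generic nonlinear conjugate gradient iteration with the classical FR, PRP, and CD update parameters mentioned in this abstract. It is a minimal illustration only: it uses a simple backtracking line search and a crisp (non-fuzzy) test function, and does not reproduce the paper's new technique or its parametric fuzzy formulation.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule="FR", tol=1e-8, max_iter=2000):
    """Generic nonlinear conjugate gradient with a backtracking (Armijo)
    line search. Illustrative only; the paper's new method is not shown."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking line search (a simplification of the Wolfe-type
        # searches usually paired with conjugate gradient methods).
        alpha, c1, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        if beta_rule == "FR":      # Fletcher-Reeves
            beta = (g_new @ g_new) / (g @ g)
        elif beta_rule == "PRP":   # Polak-Ribiere (nonnegative PRP+ safeguard)
            beta = max(g_new @ (g_new - g) / (g @ g), 0.0)
        else:                      # CD: conjugate descent (Fletcher)
            beta = (g_new @ g_new) / (-(d @ g))
        d = -g_new + beta * d
        if d @ g_new >= 0:         # restart if the direction is not descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function from a standard starting point.
rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
rosen_grad = lambda z: np.array([-2*(1 - z[0]) - 400*z[0]*(z[1] - z[0]**2),
                                 200*(z[1] - z[0]**2)])
print(nonlinear_cg(rosen, rosen_grad, [-1.2, 1.0], beta_rule="PRP"))
```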
In this study, we develop a new parameter for a three-term conjugate gradient method. The scheme relies principally on the pure conjugacy condition (PCC), which is an important condition in unconstrained nonlinear optimization in general and in conjugate gradient methods in particular. Under some assumptions, the proposed method is shown to converge and to satisfy the descent property. The numerical results display the effectiveness of the new method for solving unconstrained nonlinear optimization test problems compared with other conjugate gradient algorithms, namely the Fletcher–Reeves (FR) algorithm and the three-term Fletcher–Reeves (TTFR) algorithm, as shown in Table 1 (number of iterations and number of function evaluations) and in Figures 1, 2, and 3 (comparisons of the number of iterations, the number of function evaluations, and the time taken to evaluate the functions).
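For reference, the conjugacy condition and the generic form of a three-term search direction are, in standard notation (the specific parameter choices of the proposed scheme are not given in the abstract and are not reproduced here):

```latex
\[
d_{k+1}^{\top} y_k = 0, \qquad y_k = g_{k+1} - g_k, \qquad
d_{k+1} = -g_{k+1} + \beta_k d_k + \gamma_k y_k,
\]
```

where the scalars $\beta_k$ and $\gamma_k$ are chosen so that the conjugacy condition and the descent property hold.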
The proposed metaheuristic optimization algorithm based on the two-step Adams-Bashforth scheme (MOABT) is used in this paper for Multilayer Perceptron (MLP) training for the first time. In computer science and mathematical optimization, a metaheuristic is a high-level procedure or set of guidelines designed to find, devise, or select a search method that yields high-quality solutions to an optimization problem, especially when the available information is insufficient or incomplete, or when computational capacity is limited. Many metaheuristic methods include stochastic operations, which means that the resulting solution depends on the random variables generated during the search. Because a metaheuristic searches a broad range of feasible solutions at once, it can frequently find good solutions with less computational effort than iterative methods and exact algorithms. Metaheuristics are therefore a useful approach to such optimization problems. Several characteristics distinguish metaheuristic strategies; their common goal is to explore the search space efficiently in order to find the best or near-best solution. The techniques that make up metaheuristic algorithms range from simple local searches to complex learning processes. Eight benchmark data sets, comprising five classification data sets and three function-approximation data sets, are used to evaluate the proposed approach. The numerical results were compared with those of the well-known evolutionary trainer Gray Wolf Optimizer (GWO). The statistical study revealed that the MOABT algorithm can outperform other algorithms in avoiding local optima and in its speed of convergence to the global optimum. The results also show that the proposed problems can be classified and approximated with high accuracy.
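For reference, the two-step Adams-Bashforth scheme from which MOABT takes its name advances an initial value problem $y' = f(t, y)$ with step size $h$ as follows; how the paper maps this update onto MLP weight training is not specified in the abstract.

```latex
\[
y_{n+2} = y_{n+1} + \frac{h}{2}\bigl(3 f(t_{n+1}, y_{n+1}) - f(t_n, y_n)\bigr).
\]
```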
The nonlinear conjugate gradient method is an effective technique for solving large-scale minimization problems and has a wide range of applications in various fields, such as mathematics, chemistry, physics, engineering, and medicine. This study presents a novel spectral conjugate gradient algorithm (a nonlinear conjugate gradient algorithm), derived from the Hisham–Khalil (KH) and Newton algorithms and based on the pure conjugacy condition. The importance of this research lies in finding an appropriate method to solve all types of linear and nonlinear fuzzy equations, because the Buckley and Qu method is ineffective in solving fuzzy equations. Moreover, the conjugate gradient method does not need the Hessian matrix (the second partial derivatives of the function) in the solution. The descent property of the proposed method is established provided that the step size satisfies the strong Wolfe conditions. In numerous cases, numerical results demonstrate that the proposed technique is more efficient than the Fletcher–Reeves and KH algorithms in solving fuzzy nonlinear equations.
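For reference, the strong Wolfe conditions on the step size $\alpha_k$ referred to above are, in standard notation with gradient $g$ and search direction $d_k$:

```latex
\[
f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k g_k^{\top} d_k, \qquad
\bigl| g(x_k + \alpha_k d_k)^{\top} d_k \bigr| \le c_2 \bigl| g_k^{\top} d_k \bigr|,
\qquad 0 < c_1 < c_2 < 1.
\]
```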
Fuzzy nonlinear equations play a critical role in a variety of engineering and scientific challenges, including mathematics, chemistry, physics, biology, machine learning, deep learning, regression and classification, computer science, programming, artificial intelligence, the military, the medical and engineering industries, robotics, and smart cars. As a result, this paper proposes an optimization algorithm based on the Euler method for solving fuzzy nonlinear equations. In mathematics and computer science, the Euler method (sometimes called the forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a specified initial value. In the Euler method, the local error is proportional to the square of the step size, while the global error is proportional to the step size. The Euler method is frequently used as the basis for more complicated algorithms. The optimization algorithm based on the Euler method (OBE) uses the logic of slope differences, computed by the Euler approach, as a search mechanism for promising regions in global optimization. Furthermore, the proposed mechanism exploits two active phases, exploration and exploitation, to find the most promising regions of the search space and to move toward the best global solutions. To avoid local optima and increase the rate of convergence, we use the ESQ mechanism. The OBE algorithm is very efficient in solving fuzzy nonlinear equations: it approaches the global minimum and avoids local minima. In comparison with the GWO algorithm, the OBE algorithm shows a clear advantage in reaching the solution with higher accuracy. The numerical results show that the new algorithm outperforms the GWO algorithm by 50% in Example 1, 51% in Example 2, and 55% in Example 3.
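For reference, the forward Euler step for an initial value problem $y' = f(t, y)$, $y(t_0) = y_0$, with step size $h$, together with the error orders cited above, is shown below; how the OBE algorithm turns these slope differences into a search operator is not specified in the abstract.

```latex
\[
y_{n+1} = y_n + h\, f(t_n, y_n), \qquad
\text{local error } O(h^2), \qquad \text{global error } O(h).
\]
```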