“…Optimization with derivative‐based algorithms such as gradient descent (GD; Zhao, Li, & Irwin), least squares (LS; Huang & Chen), Levenberg‐Marquardt (LM; De Canete, Garcia‐Cerezo, García‐Moral, Del Saz, & Ochoa; Jang & Mizutani), the Kalman filter (KF) and its variants (Barragán, Al‐Hadithi, Jiménez, & Andújar; Khanesar, Kayacan, Teshnehlab, & Kaynak), and the simplex method (Wang, Li, & Li) depends on derivative information. In contrast, the update rules of derivative‐free algorithms such as the genetic algorithm (GA; Sarkheyli, Zain, & Sharif), particle swarm optimization (PSO; Elloumi, Krid, & Masmoudi; Ghomsheh, Shoorehdeli, & Teshnehlab; Maldonado, Castillo, & Melin), the artificial bee colony algorithm (Habbi, Boudouaoui, Karaboga, & Ozturk; Bagis & Konar), ant colony optimization (Juang & Hsu; Juang, Hung, & Hsu), the search group algorithm (Noorbin & Alfi), simulated annealing (SA; Almaraashi et al.; Almaraashi, John, Hopgood, & Ahmadi), and the water cycle algorithm (Pahnehkolaei, Alfi, Sadollah, & Kim) do not rely on functional derivatives.…”
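The distinction drawn in the excerpt can be illustrated with a minimal sketch on a toy one-dimensional objective (not from the cited works): gradient descent needs an explicit derivative of the cost function, whereas a derivative-free method, reduced here to a simple (1+1) random search standing in for GA/PSO-style updates, uses only function evaluations.

```python
import random

def f(x):
    # Toy objective with its minimum at x = 0.
    return x * x

def df(x):
    # Analytic derivative of f; gradient descent cannot run without it.
    return 2 * x

def gradient_descent(x0, lr=0.1, steps=100):
    # Derivative-based update: x <- x - lr * f'(x).
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

def random_search(x0, step=0.5, iters=500, seed=0):
    # Derivative-free update: propose a perturbed candidate and keep it
    # if the objective value improves; only f(x) is ever evaluated.
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        if f(cand) < fx:
            x, fx = cand, f(cand)
    return x

print(abs(gradient_descent(5.0)))   # converges near the minimum at 0
print(abs(random_search(5.0)))      # also approaches 0, without df(x)
```

Both routines minimize the same function; the practical trade-off the survey alludes to is that the derivative-based update converges quickly when `df` is available and well-behaved, while the derivative-free update remains applicable when the derivative is unavailable, discontinuous, or expensive to compute.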