“…In our experiments, the proposed algorithms are extensively validated on three benchmarks commonly used in the literature: (1) the Carlier dataset, comprising eight instances with between 7 and 14 jobs and between 4 and 9 machines [39]; (2) the Reeves dataset, with 21 instances in which the number of jobs ranges between 20 and 75 and the number of machines between 5 and 20 [40]; and (3) the Heller dataset, with two instances of 20 and 100 jobs, respectively, each with 10 machines [41]. These datasets are taken from [42]; their characteristics (numbers of jobs and machines) and the best-known makespan z* are reported in Table 3. Furthermore, the proposed algorithms are extensively compared with a number of well-established optimization algorithms: the sine cosine algorithm (SCA) [43], salp swarm algorithm (SSA) [44], whale optimization algorithm (WOA) [34], genetic algorithm (GA), equilibrium optimization algorithm (EOA) [45], marine predators algorithm (MPA) [42], and a hybrid tunicate swarm algorithm (HTSA) [46] integrated with the local search strategy. To ensure a fair comparison and verify their efficacy, six performance metrics are used: average relative error (ARE), worst relative error (WRE), best relative error (BRE), average makespan (Avg), standard deviation (SD), and computational cost (Time, in milliseconds (ms)).…”
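The excerpt does not spell out how the relative-error metrics are computed, so the sketch below assumes the definitions standard in the flow-shop scheduling literature: the relative error of a makespan C with respect to the best-known makespan z* is 100·(C − z*)/z*, BRE/ARE/WRE are the minimum/mean/maximum of that error over independent runs, and Avg and SD are the mean and standard deviation of the makespans themselves. The function and variable names (`summarize_runs`, `makespans`, `z_star`) are illustrative, not taken from the paper.

```python
# Hedged sketch: relative-error metrics as commonly defined in permutation
# flow-shop studies; the paper's exact definitions may differ
# (e.g., percentage vs. fraction). Names below are illustrative.
from statistics import mean, pstdev

def relative_error(c: float, z_star: float) -> float:
    """Relative deviation (in %) of a makespan c from the best-known makespan z*."""
    return 100.0 * (c - z_star) / z_star

def summarize_runs(makespans: list[float], z_star: float) -> dict[str, float]:
    """Compute BRE, ARE, WRE, Avg, and SD over the makespans of independent runs."""
    errors = [relative_error(c, z_star) for c in makespans]
    return {
        "BRE": min(errors),       # best relative error
        "ARE": mean(errors),      # average relative error
        "WRE": max(errors),       # worst relative error
        "Avg": mean(makespans),   # average makespan
        "SD": pstdev(makespans),  # standard deviation of the makespans
    }

# Example: five runs on a hypothetical instance with best-known makespan 7038
print(summarize_runs([7038, 7042, 7051, 7038, 7066], z_star=7038))
```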