This paper introduces a new metaphor-free metaheuristic called the attack-leave optimizer (ALO). As its name suggests, ALO deploys two strategies to find the optimal solution. The central concept of ALO is to prioritize guided search as the mandatory strategy; a random search is performed only if the guided search fails to improve the current solution. ALO consists of four guided searches and one random search, performed in three phases: two mandatory and one optional. In the first phase, a guided search is conducted using the global best solution as the reference. In the second phase, a guided search is conducted using a randomly selected solution as the reference. The random search is performed in the third phase. To evaluate ALO, it was tested on 23 classic benchmark functions and compared against five existing metaheuristics with known shortcomings: the mixed leader-based optimizer (MLBO), slime mould algorithm (SMA), golden search optimizer (GSO), zebra optimization algorithm (ZOA), and coati optimization algorithm (COA). The results indicate that ALO is highly competitive, outperforming MLBO, SMA, GSO, COA, and ZOA on 16, 16, 14, 10, and 9 functions, respectively, demonstrating that ALO is a promising new metaheuristic.
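
To make the phase structure concrete, the following is a minimal Python sketch of the three-phase loop described above: a mandatory guided step toward the global best, a mandatory guided step referenced to a randomly selected solution, and a random step taken only when neither guided step improves the current solution. The update rules, step sizes, objective function, and parameter values here are placeholders chosen for illustration (the sketch uses one guided move per phase, whereas ALO defines four guided searches); they are not the paper's actual equations.

```python
import numpy as np

def sphere(x):
    # Hypothetical test objective (sphere function); not taken from the paper.
    return float(np.sum(x * x))

def alo_sketch(obj, dim=10, pop_size=30, iters=200, lb=-100.0, ub=100.0, seed=0):
    """Illustrative three-phase loop in the spirit of ALO (placeholder update rules)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(pop_size, dim))
    fit = np.array([obj(x) for x in pop])
    best_i = int(fit.argmin())
    best, best_fit = pop[best_i].copy(), float(fit[best_i])

    for _ in range(iters):
        for i in range(pop_size):
            improved = False

            # Phase 1 (mandatory): guided step referenced to the global best.
            cand = np.clip(pop[i] + rng.random(dim) * (best - pop[i]), lb, ub)
            f = obj(cand)
            if f < fit[i]:
                pop[i], fit[i], improved = cand, f, True

            # Phase 2 (mandatory): guided step referenced to a random member.
            j = int(rng.integers(pop_size))
            cand = np.clip(pop[i] + rng.random(dim) * (pop[j] - pop[i]), lb, ub)
            f = obj(cand)
            if f < fit[i]:
                pop[i], fit[i], improved = cand, f, True

            # Phase 3 (optional): random step only if no guided step improved.
            if not improved:
                cand = np.clip(pop[i] + rng.normal(0.0, 0.01 * (ub - lb), dim), lb, ub)
                f = obj(cand)
                if f < fit[i]:
                    pop[i], fit[i] = cand, f

            # Track the global best found so far.
            if fit[i] < best_fit:
                best, best_fit = pop[i].copy(), float(fit[i])

    return best, best_fit

# Example usage (hypothetical): x_star, f_star = alo_sketch(sphere)
```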