Ant colony optimization (ACO) is a well-known class of swarm intelligence algorithms suitable for solving many NP-hard problems. An important component of such algorithms is the record of pheromone trails, which reflects the colony's experience with previously constructed solutions to the problem instance being solved. From the pheromones, the algorithm builds a probabilistic model that is exploited to construct new and, hopefully, better solutions. Traditionally, there are two strategies for updating pheromone trails. The best-so-far (global-best) strategy is rather greedy and can cause premature convergence of the algorithm toward suboptimal solutions. The other strategy, iteration best, promotes exploration and slower convergence, which is sometimes too slow and lacks focus. To allow better adaptability of ACO algorithms, we use the κ-best, max-κ-best, and 1/λ-best strategies, which form the entire spectrum of strategies between best-so-far and iteration best, and extend beyond it. Selecting a suitable strategy depends on the type of problem, the parameters, the heuristic information, and the conditions in which the ACO is used. In this research, we use two representative combinatorial NP-hard problems for which very effective heuristic information is widely known, the symmetric traveling salesman problem (TSP) and the asymmetric traveling salesman problem (ATSP), to empirically analyze the influence of these strategies on algorithmic performance. The experiments are carried out on 45 TSP and 47 ATSP instances using the MAX-MIN ant system (MMAS) variant of ACO, with and without local optimization, with each problem instance repeated 101 times for each of 24 pheromone reinforcement strategies. The results show that, in a large majority of cases, MMAS with adjustable pheromone reinforcement strategies outperformed MMAS with the classical strategies.
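To make the strategy spectrum concrete, the sketch below shows a single MMAS pheromone update under one plausible reading of the κ-best strategy: reinforce with the best solution found in the last κ iterations, so that κ = 1 recovers iteration best and a sufficiently large κ approximates best-so-far. The function name, parameter defaults, and exact selection rule are illustrative assumptions, not the paper's implementation.

```python
def kappa_best_update(pheromone, history, kappa, rho=0.02,
                      tau_min=0.01, tau_max=1.0):
    """One MAX-MIN Ant System pheromone update with a kappa-best strategy.

    pheromone: n-by-n matrix of trail values (list of lists).
    history:   list of (tour, length) pairs, one per past iteration,
               ordered oldest to newest.
    kappa = 1 reduces to iteration best; a large kappa approaches
    best-so-far (this reading of "kappa-best" is an assumption here).
    """
    # Select the reinforcing solution: best among the last kappa iterations.
    tour, length = min(history[-kappa:], key=lambda s: s[1])

    n = len(pheromone)
    # Evaporate every trail.
    for i in range(n):
        for j in range(n):
            pheromone[i][j] *= (1.0 - rho)

    # Deposit 1/L on the directed edges of the selected tour
    # (for the symmetric TSP, one would mirror the deposit onto
    # pheromone[b][a]; for the ATSP only the directed edge is updated).
    deposit = 1.0 / length
    for a, b in zip(tour, tour[1:] + tour[:1]):
        pheromone[a][b] += deposit

    # MMAS clamps all trails to the interval [tau_min, tau_max].
    for i in range(n):
        for j in range(n):
            pheromone[i][j] = min(tau_max, max(tau_min, pheromone[i][j]))


# Example: a 4-city instance with a 3-iteration history; with kappa=2
# the update reinforces the best tour of the last two iterations.
pher = [[0.5] * 4 for _ in range(4)]
hist = [([0, 1, 2, 3], 10.0), ([0, 2, 1, 3], 9.0), ([0, 3, 1, 2], 11.0)]
kappa_best_update(pher, hist, kappa=2)  # reinforces the length-9.0 tour
```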