Ant colony optimization (ACO) is a successful method for solving difficult combinatorial optimization problems. Following Ant System, the first ACO algorithm, a large number of algorithmic variants have been developed that show significantly better performance on a wide range of optimization problems. Typically, performance has been measured by the solution quality achieved within a given computation time limit, which usually allows the evaluation of a very large number of candidate solutions, often in the range of millions. However, there are practical applications where the number of evaluations that can be performed is severely restricted, either by tight real-time constraints or by the high computational cost of evaluating a solution. Since these situations differ substantially from those for which ACO algorithms were originally designed, current knowledge about good parameter settings or the most promising search strategies may not be directly applicable. In this paper, we examine the performance of different ACO algorithms under a strongly limited budget of 1000 evaluations. We do so using both default parameter settings from the literature and parameter settings tuned for the limited-budget scenario. In addition, we compare the performance of the ACO algorithms to algorithms that make use of surrogate modeling of the search landscape. We show that tuning algorithms for the limited-budget case is of utmost importance, that direct search through the ACO algorithms keeps an edge over techniques using surrogate modeling, and that the ACO variants proposed as improvements over Ant System remain preferable.