The 2D soccer simulation league is one of the best test beds for artificial intelligence (AI) research and has achieved great success in the domains of multi-agent cooperation and machine learning. However, the problem of an integrated offensive strategy remains unsolved because of the dynamic and unpredictable nature of the environment. In this paper, we present a novel offensive strategy based on multi-group ant colony optimization (MACO-OS). The strategy uses the pheromone evaporation mechanism to compute a preference value for each attack action in different environments, and stores the success rate and preference values in an attack information tree maintained in the background. The attacker's decision module then selects the best attack action according to the preference value. The MACO-OS approach has been successfully implemented in our 2D soccer simulation team in RoboCup competitions. Experimental results indicate that agents developed with this strategy, together with related techniques, delivered outstanding performance.
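To make the pheromone-based preference mechanism concrete, the following minimal Python sketch shows one plausible way to keep a preference (pheromone) value and success rate per state-action pair, apply evaporation on each update, and let a decision module pick the highest-preference attack action. It is an illustration only, not the paper's implementation: the state keys, action names, evaporation and deposit constants, and the flat table used in place of the attack information tree are all assumptions.

import random
from collections import defaultdict

EVAPORATION = 0.1   # fraction of pheromone lost per update (assumed value)
DEPOSIT = 1.0       # pheromone added after a successful attack (assumed value)

class AttackInfoTable:
    """Keeps a preference (pheromone) value and success rate per (state, action)."""

    def __init__(self):
        self.pheromone = defaultdict(lambda: 1.0)   # (state, action) -> preference
        self.attempts = defaultdict(int)
        self.successes = defaultdict(int)

    def update(self, state, action, succeeded):
        key = (state, action)
        self.attempts[key] += 1
        if succeeded:
            self.successes[key] += 1
        # Evaporate first, then deposit on success, in ant-colony-optimization style.
        self.pheromone[key] *= (1.0 - EVAPORATION)
        if succeeded:
            self.pheromone[key] += DEPOSIT

    def success_rate(self, state, action):
        key = (state, action)
        return self.successes[key] / self.attempts[key] if self.attempts[key] else 0.0

    def select_action(self, state, actions):
        # Greedy choice over the preference values; ties broken at random.
        best = max(self.pheromone[(state, a)] for a in actions)
        return random.choice([a for a in actions if self.pheromone[(state, a)] == best])

# Example: a few updates in a coarse, hypothetical "opponent half, marked" state.
table = AttackInfoTable()
actions = ["pass", "dribble", "shoot"]
table.update("opp_half_marked", "pass", succeeded=True)
table.update("opp_half_marked", "shoot", succeeded=False)
print(table.select_action("opp_half_marked", actions))   # -> "pass"

In the actual strategy, such statistics would be organized in the attack information tree rather than a flat dictionary, and the selection rule may combine the success rate with the preference value; the sketch above only illustrates the evaporation-and-select loop described in the abstract.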