Two promising approaches to coverage path planning are reward-based and pheromone-based methods. Reward-based methods allow heuristics to be learned automatically, often yielding superior performance over hand-crafted rules. Pheromone-based methods, on the other hand, consistently demonstrate superior generalization and adaptation in unfamiliar environments. To obtain the best of both worlds, we introduce Greedy Entropy Maximization (GEM), a hybrid approach that aims to maximize the entropy of a pheromone deposited by a swarm of homogeneous ant-like agents. We begin by establishing a sharp upper bound on achievable entropy and show that attaining it corresponds to optimal dynamic coverage path planning. Next, we demonstrate that GEM closely approaches this upper bound despite depriving agents of basic necessities such as memory and explicit communication. Finally, we show that GEM can be executed asynchronously in constant time, enabling it to scale to arbitrarily large swarms.
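
To make the greedy step concrete, the sketch below shows one way a memoryless, non-communicating agent could greedily maximize the entropy of a shared pheromone field on a grid. It is a minimal illustration under stated assumptions, not the paper's implementation: the grid size, 4-connected moves, deposit amount, and the names `gem_step`, `entropy`, and `pheromone` are all hypothetical. The entropy of the normalized field is bounded above by the log of the number of cells, which the uniform (fully covered) distribution attains.

```python
# Illustrative sketch of greedy entropy maximization on a pheromone grid.
# All constants and names here are assumptions for the example, not the
# paper's actual algorithm or parameters.
import random
import numpy as np

H, W = 8, 8                       # grid dimensions (assumed)
pheromone = np.ones((H, W))       # strictly positive initial deposit (assumed)

def entropy(field):
    """Shannon entropy of the pheromone field normalized to a distribution."""
    q = field / field.sum()
    return float(-(q * np.log(q)).sum())

def gem_step(pos, field, deposit=1.0):
    """One greedy step: move to the 4-neighbour whose deposit would most
    increase the entropy of the shared field. The agent is memoryless and
    senses only nearby pheromone levels."""
    r, c = pos
    candidates = [(r + dr, c + dc)
                  for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= r + dr < H and 0 <= c + dc < W]
    random.shuffle(candidates)    # break ties randomly
    def gain(cell):
        trial = field.copy()
        trial[cell] += deposit
        return entropy(trial)
    best = max(candidates, key=gain)
    field[best] += deposit        # deposit pheromone at the chosen cell
    return best

pos = (0, 0)
for _ in range(500):
    pos = gem_step(pos, pheromone)

# Entropy is upper-bounded by log(H*W), attained by uniform coverage.
print(f"entropy: {entropy(pheromone):.3f} / bound: {np.log(H * W):.3f}")
```

For a small deposit, this greedy rule is equivalent to moving toward the least-marked neighbouring cell, since adding mass where the normalized distribution is lowest flattens it most; each step therefore needs only local sensing, consistent with agents that lack memory and explicit communication.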