This paper addresses the dispatch decision problem faced by an urban consolidation center. The center receives orders according to a stochastic arrival process and dispatches them in batches for last-mile distribution. The operator of the center aims to find a cost-minimizing consolidation policy, accounting for the orders at hand, pre-announced orders, and stochastic arrivals. We present this problem as a variant of the Delivery Dispatching Problem that includes dispatch windows, and model it as a Markov decision problem. For toy-sized instances, we solve this model to optimality. Through numerical experiments on these instances, we show that we approximate the optimal values with an error of less than 2%. Larger instances suffer from intractably large state, outcome, and action spaces. We propose an Approximate Dynamic Programming (ADP) algorithm that can handle such instances, using value function approximation to estimate the downstream costs. To cope with large action spaces (with sizes up to 2^120 in our experiments), we formulate an integer linear program to be used within our ADP algorithm. To evaluate the performance of our ADP policies, we test them against various benchmark policies, including a lookahead policy based on scenario sampling. We test the performance of ADP on a variety of networks. When the dispatching problem provides sufficient flexibility in dispatch times, ADP outperforms our myopic benchmark policies by more than 15%, and lookahead policies by over 10%.