In the balanced allocations framework, the goal is to allocate m balls into n bins so as to minimize the gap (the difference between the maximum and the average load). The One-Choice process allocates each ball to a bin sampled uniformly at random and achieves w.h.p. a Θ(√((m/n) · log n)) gap. The Two-Choice process allocates each ball to the lesser loaded of two bins sampled uniformly at random and achieves w.h.p. a log₂ log n + Θ(1) gap. Finally, the (1 + β) process mixes between these two processes: each ball is allocated using Two-Choice with probability β ∈ (0, 1) and using One-Choice otherwise, achieving w.h.p. a Θ(log n/β) gap.

We focus on the outdated information setting of [5], where balls are allocated in batches of size b. For almost the entire range b ∈ [1, O(n log n)], it was shown in [18] that Two-Choice achieves w.h.p. the asymptotically optimal gap, and for b = Ω(n log n) it was shown in [16] that it achieves w.h.p. a Θ(b/n) gap.

In this work, we establish that the (1 + β) process, for appropriately chosen β, achieves w.h.p. the asymptotically optimal gap of O(√((b/n) · log n)) for any b ∈ [2n log n, n³]. This not only proves the surprising phenomenon that allocating greedily based on Two-Choice is not always optimal, but also that mixing two processes (One-Choice and Two-Choice) can lead to a process whose gap is better than both. Furthermore, the upper bound on the gap applies to a larger family of processes and continues to hold in the presence of weights sampled from distributions with bounded moment generating functions (MGFs).
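To make the outdated information setting concrete, the following minimal Python sketch simulates the batched (1 + β) process: within each batch of b balls, every allocation decision is made against the (stale) loads from the start of that batch, and the new balls are revealed only once the batch completes. The function name, parameters, and the tuning β ≈ √((n/b) · log n) are our own illustrative assumptions, not taken from the paper; this is a toy simulation, not the paper's analysis.

import math
import random

def batched_one_plus_beta(n, m, b, beta, seed=0):
    """Simulate the (1 + beta) process with batches of size b and
    return the final gap (max load minus average load)."""
    rng = random.Random(seed)
    loads = [0] * n
    allocated = 0
    while allocated < m:
        snapshot = loads[:]                    # stale loads visible during this batch
        batch = min(b, m - allocated)
        for _ in range(batch):
            i = rng.randrange(n)               # first uniform sample (One-Choice)
            if rng.random() < beta:            # with probability beta, take a second
                j = rng.randrange(n)           # sample and keep the lesser loaded
                if snapshot[j] < snapshot[i]:  # bin according to the stale snapshot
                    i = j
            loads[i] += 1
        allocated += batch
    return max(loads) - m / n

# Example run (all parameter choices are illustrative assumptions):
n = 1000
b = 2 * n * math.ceil(math.log(n))             # batch size in [2n log n, n^3]
beta = math.sqrt(n * math.log(n) / b)          # assumed tuning, beta in (0, 1)
print(batched_one_plus_beta(n, m=20 * b, b=b, beta=beta))

Comparing this against the same loop with beta = 1 (pure Two-Choice on stale loads) illustrates the phenomenon stated above: for large batches, occasionally ignoring the second sample spreads the load of a batch more evenly than always following the outdated snapshot greedily.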