The textbook adversary bound for function evaluation [1,5,6] states that to evaluate a function $f : D \to C$ with success probability $\frac{1}{2}+\delta$ in the quantum query model, one needs at least $\left(2\delta - \sqrt{1-4\delta^2}\right)\mathrm{Adv}(f)$ queries, where $\mathrm{Adv}(f)$ is the optimal value of a certain optimization problem. For $\delta \ll 1$, this only allows for a bound of $\Theta(\delta^2)\,\mathrm{Adv}(f)$, even after a repetition-and-majority-voting argument. In contrast, the polynomial method can sometimes prove a bound that does not converge to $0$ as $\delta \to 0$. We improve the $\delta$-dependent prefactor and achieve a bound of $2\delta\,\mathrm{Adv}(f)$. The proof idea is to "turn the output condition into an input condition": from an algorithm that transforms perfectly input-independent initial states into imperfectly distinguishable final states, we construct one that transforms imperfectly input-independent initial states into perfectly distinguishable final states in the same number of queries, by projecting onto the "correct" final subspaces and uncomputing. The resulting $\delta$-dependent condition on initial Gram matrices, compared with the original algorithm's condition on final Gram matrices, allows deriving the tightened prefactor.
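To make the comparison of prefactors concrete, the following is a brief sketch of the small-$\delta$ arithmetic, assuming the standard Chernoff/majority-voting boosting argument; the notation $Q_{2/3}(f)$ for the bounded-error query complexity is introduced here only for illustration and is not taken from the references. The textbook prefactor is negative for small bias,
\[
2\delta - \sqrt{1-4\delta^2} \;=\; 2\delta - 1 + 2\delta^2 + O(\delta^4) \;<\; 0
\qquad \text{for } \delta < \tfrac{1}{2\sqrt{2}},
\]
so it gives no information directly. Repeating a bias-$\delta$, $T$-query algorithm $k = \Theta(1/\delta^2)$ times and taking a majority vote yields a constant-success algorithm with $kT$ queries, hence
\[
T \;\ge\; \frac{1}{k}\,Q_{2/3}(f) \;=\; \Omega\!\left(\delta^2\right)\mathrm{Adv}(f),
\]
whereas the bound obtained here scales linearly in the bias, $T \ge 2\delta\,\mathrm{Adv}(f)$.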