We present a general technique for the analysis of first-order methods. The technique relies on the construction of a duality gap for an appropriate approximation of the objective function, where the approximation improves as the algorithm converges. We show that, in continuous time, enforcing an invariant that this approximate duality gap decreases at a certain rate exactly recovers a wide range of continuous-time first-order methods. We characterize the discretization errors incurred by different discretization methods, and show how iteration-complexity-optimal methods for various classes of problems cancel out the discretization error. The technique is illustrated on various classes of problems, including convex minimization with Lipschitz-continuous objectives, smooth convex minimization, composite minimization, smooth and strongly convex minimization, variational inequalities with monotone operators, and convex-concave saddle-point optimization, and it naturally extends to other settings.

1 Width-independent algorithms enjoy poly-logarithmic dependence of their convergence times on the constraint matrix width, namely, the ratio between the largest and smallest non-zero entries of the constraint matrix. By contrast, standard first-order methods incur (at best) linear dependence on the matrix width, which does not even qualify as polynomial-time convergence [27].
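To make the invariant concrete, the following is a minimal sketch of the construction, with notation assumed for illustration rather than taken verbatim from the paper: let $U_t$ be an upper bound on $f(x_t)$, let $L_t$ be a lower bound on $f(x^*)$ that tightens as the method runs, and let $\alpha_t$ be a growing weight.

% Sketch (illustrative notation): approximate duality gap and the enforced invariant.
\[
  G_t = U_t - L_t \;\ge\; f(x_t) - f(x^*),
  \qquad
  \frac{\mathrm{d}}{\mathrm{d}t}\bigl(\alpha_t G_t\bigr) \;\le\; 0.
\]
% The invariant immediately yields the continuous-time convergence bound:
\[
  f(x_t) - f(x^*) \;\le\; G_t \;\le\; \frac{\alpha_{t_0}\, G_{t_0}}{\alpha_t}.
\]

Under this reading, the fastest admissible growth of $\alpha_t$ dictates the continuous-time convergence rate, and the discretization error determines how much of that rate survives in discrete time.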