This paper deals with a general form of variational problems in Banach spaces which encompasses variational inequalities as well as minimization problems. We prove a characterization of local error bounds for the distance to the (primal-dual) solution set and give a sufficient condition for such an error bound to hold. In the second part of the paper, we consider an algorithm of augmented Lagrangian type for the solution of such variational problems. We establish global convergence properties of the method and then use the error bound theory to provide estimates for the rate of convergence and to deduce boundedness of the sequence of penalty parameters. Finally, numerical results for optimal control, Nash equilibrium problems, and elliptic parameter estimation problems are presented.

Lemma 2.1. Assume that $g : X \to H$ is concave. If $m : H \to \mathbb{R}$ is convex and decreasing, then $m \circ g$ is convex. In particular:
(a) $d_K \circ g$ is convex;
(b) $x \mapsto (\lambda, g(x))$ is convex for every $\lambda \in K_\infty^\circ$;
(c) the set $M = \{x \in X : g(x) \in K\}$ is convex.

Proof. Let $x, y \in X$ and $t \in [0, 1]$. Then
\[
g(tx + (1-t)y) \ge_{K_\infty} t\,g(x) + (1-t)\,g(y)
\]
by the concavity of $g$. Applying $m$ on both sides yields
\[
m\bigl(g(tx + (1-t)y)\bigr) \le m\bigl(t\,g(x) + (1-t)\,g(y)\bigr) \le t\,m(g(x)) + (1-t)\,m(g(y)),
\]
where we used the monotonicity and the convexity of $m$. Hence, $m \circ g$ is convex. Assertion (a) now follows because $d_K$ is decreasing (see above) and convex [4, Cor. 12.12]. Similarly, for (b), the function $y \mapsto (\lambda, y)$ with $\lambda \in K_\infty^\circ$ is obviously a convex function, and it is decreasing because $(\lambda, k) \le 0$ for all $k \in K_\infty$. Finally, for (c), note that
\[
M = \{x \in X : g(x) \in K\} = \{x \in X : d_K(g(x)) \le 0\}.
\]
Hence, $M$ is a lower level set of the convex function $d_K \circ g$ and therefore a convex set. $\square$
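To make the lemma concrete, the following finite-dimensional special case (an illustration added here, not part of the original development) may help; it assumes the order on $H$ is the one induced by the recession cone $K_\infty$:

\[
\text{Take } X = \mathbb{R}^n,\ H = \mathbb{R}^m,\ K = \mathbb{R}^m_+,\ \text{so } K_\infty = \mathbb{R}^m_+
\text{ and the induced order is the componentwise order. Then}
\]
\[
d_K(y) = \operatorname{dist}(y, \mathbb{R}^m_+) = \|\min(y, 0)\|,
\]
\[
\text{which is convex and decreasing: increasing any component of } y \text{ moves it toward } \mathbb{R}^m_+.
\]

In this setting, concavity of $g$ with respect to the order means that each component $g_i$ is concave in the usual sense. Assertion (a) then says that the constraint violation measure $x \mapsto \|\min(g(x), 0)\|$ is convex, and assertion (c) recovers the classical fact that the feasible set $\{x \in \mathbb{R}^n : g(x) \ge 0\}$ of a system of concave inequality constraints is convex.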