The classical convergence analysis of quasi-Newton methods assumes that the function and gradients employed at each iteration are exact. In this paper, we consider the case when there are (bounded) errors in both computations and establish conditions under which a slight modification of the BFGS algorithm with an Armijo-Wolfe line search converges to a neighborhood of the solution that is determined by the size of the errors. One of our results is an extension of the analysis presented in [4], which establishes that, for strongly convex functions, a fraction of the BFGS iterates are good iterates. We present numerical results illustrating the performance of the new BFGS method in the presence of noise.

We view these as less desirable alternatives [16] for reasons discussed in the next section. The line search could also be performed in other ways. For example, in their analysis of a gradient method, Berahas et al. [2] relax the Armijo condition to take noise into account. We prefer to retain the standard Armijo-Wolfe line search without any modification, as this has practical advantages (these conditions are sketched at the end of this section).

The literature on the BFGS method with inaccurate gradients includes the implicit filtering method of Kelley et al. [5,10], which assumes that noise can be diminished at will at any iteration. Deterministic convergence guarantees have been established for that method by ensuring that the noise decays as the iterates approach the solution. Dennis and Walker [7] and Ypma [18] study bounded deterioration properties, and local convergence, of quasi-Newton methods with errors, when started near the solution with a Hessian approximation that is close to the exact Hessian. Barton [1] proposes an implementation of the BFGS method in which gradients are computed by an appropriate finite-differencing technique, assuming that the noise level in the function evaluation is known. Berahas et al. [2] estimate the noise in the function using Hamming's finite-difference technique [9], as extended by Moré and Wild [11], and employ this estimate to compute a finite-difference gradient for use in the BFGS method. They analyze a gradient method with a relaxation of the Armijo condition, but do not study the effects of noise on BFGS updating.

There has recently been some interest in designing quasi-Newton methods for machine learning applications using stochastic approximations to the gradient [3,8,12,17]. These papers avoid potential difficulties with BFGS or L-BFGS updating by assuming that the quality of gradient differences is always controlled, and as a result, the analysis follows lines similar to those for classical BFGS and L-BFGS.

This paper is organized into five sections. The proposed algorithm is described in Section 2. Section 3, the bulk of the paper, presents a sequence of lemmas concerning the existence of stepsizes that satisfy the Armijo-Wolfe conditions, the beneficial effect of lengthening the differencing interval, and the properties of "good iterates," culminating in a global convergence result. Some numerical tests that illustrate the performance ...
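For concreteness, the following Python sketch illustrates the two basic ingredients referred to above: a forward-difference gradient computed with a user-supplied differencing interval, and a check of the standard (unmodified) Armijo-Wolfe conditions. This is only a minimal illustration under assumed names and constants (the oracles f and g, the interval h, and the parameters C1 and C2 are placeholders); it does not reproduce the algorithm, the interval-lengthening strategy, or the noise-estimation procedures of [2,9,11] discussed in this paper.

```python
import numpy as np

# Illustrative line-search parameters; the paper's own settings may differ.
C1, C2 = 1e-4, 0.9  # Armijo (sufficient decrease) and Wolfe (curvature) constants


def fd_gradient(f, x, h):
    """Forward-difference gradient of a (possibly noisy) function f at x,
    using a fixed differencing interval h."""
    fx = f(x)
    g = np.empty(x.size)
    for i in range(x.size):
        e = np.zeros(x.size)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g


def armijo_wolfe_satisfied(f, g, x, p, alpha):
    """Check the standard Armijo-Wolfe conditions at steplength alpha along
    direction p, using whatever function/gradient oracles f and g provide
    (exact, or corrupted by bounded errors)."""
    fx, gx = f(x), g(x)
    dd = gx @ p                              # directional derivative at x
    x_new = x + alpha * p
    armijo = f(x_new) <= fx + C1 * alpha * dd
    wolfe = g(x_new) @ p >= C2 * dd          # curvature condition
    return armijo and wolfe


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eps_f = 1e-6                              # assumed bound on the noise in f
    f = lambda z: 0.5 * z @ z + eps_f * rng.uniform(-1.0, 1.0)  # noisy quadratic
    h = 2.0 * np.sqrt(eps_f)                  # rule-of-thumb differencing interval
    g = lambda z: fd_gradient(f, z, h)
    x = np.array([1.0, -2.0])
    p = -g(x)                                 # steepest-descent direction, for illustration
    print(armijo_wolfe_satisfied(f, g, x, p, alpha=1.0))
```

When a bound eps_f on the noise in the function is available (as assumed in [1] and estimated in [2]), a common rule of thumb is to take the forward-difference interval proportional to the square root of eps_f, as in the sketch; the beneficial effect of lengthening this interval is the subject of the analysis in Section 3.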