There has been much recent interest in finding unconstrained local minima of smooth functions, due in part to the prevalence of such problems in machine learning and robust statistics. A particular focus is algorithms with good complexity guarantees. Second-order Newton-type methods that make use of regularization and trust regions have been analyzed from such a perspective. More recent proposals, based chiefly on first-order methodology, have also been shown to enjoy optimal iteration complexity rates, while providing additional guarantees on computational cost.

In this paper, we present an algorithm with favorable complexity properties that differs in two significant ways from other recently proposed methods. First, it is based on line searches only: each step involves computation of a search direction, followed by a backtracking line search along that direction. Second, its analysis is rather straightforward, relying for the most part on the standard technique for demonstrating sufficient decrease in the objective from backtracking. In the latter part of the paper, we consider inexact computation of the search directions, using iterative methods from linear algebra: the conjugate gradient and Lanczos methods. We derive modified convergence and complexity results for these more practical methods.

Key words. smooth nonconvex unconstrained optimization, line-search methods, second-order methods, second-order necessary conditions, iteration complexity.

AMS subject classifications. 49M05, 49M15, 90C06, 90C60.

Trust-region schemes that guarantee convergence to points satisfying approximate second-order optimality conditions require at most $\mathcal{O}\left(\max\left\{\epsilon_g^{-2}\epsilon_H^{-1},\, \epsilon_H^{-3}\right\}\right)$ iterations [9], where $\epsilon_g$ and $\epsilon_H$ denote the tolerances on the first- and second-order conditions, respectively. Cubic regularization methods in their basic form [6] have better complexity bounds than trust-region schemes, requiring at most $\mathcal{O}\left(\max\left\{\epsilon_g^{-2},\, \epsilon_H^{-3}\right\}\right)$ iterations. The difference can be explained by the restriction that the trust-region constraint enforces on the norm of the steps. Recent work has shown that it is possible to improve the bound for trust-region algorithms by using specific definitions of the trust-region radius [13]. The best known iteration bound for a second-order algorithm (that is, an algorithm relying on the use of second-order derivatives and Newton-type steps) is $\mathcal{O}\left(\max\left\{\epsilon_g^{-3/2},\, \epsilon_H^{-3}\right\}\right)$. This bound was established originally (in the form of a global convergence rate) in [17], by considering cubic regularization of Newton's method. The same result is achieved by the adaptive cubic regularization framework under suitable assumptions on the computed step [9]. Recent proposals have shown that the same bound can be attained by algorithms other than cubic regularization. A modified trust-region method [11], a variable-norm trust-region scheme [16], and a quadratic regularization algorithm with a cubic descent condition [2] all achieve the same bound. When $\epsilon_g = \epsilon_H = \epsilon$ for some $\epsilon \in (0, 1)$, all the bounds mentioned above reduce to $\mathcal{O}(\epsilon^{-3})$. It has been established that this order is sharp for the class of second-order methods [9], and it can be proved for a wide range of algorithms that make use of second-order derivative information; see [12]. Setting $\epsilon_H = \epsilon_g^{1/2}$ ...
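To make the step mechanism described in the abstract concrete, the following sketch shows a generic backtracking line search driven by an Armijo-type sufficient decrease test. It is an illustration only, written in Python with NumPy: the function names, parameter values, and the steepest-descent direction used in the usage example are assumptions made for this sketch, and the decrease condition used in the paper's analysis is tailored to its complexity guarantees rather than being this textbook test.

import numpy as np

def backtracking_step(f, grad_f, x, direction, alpha0=1.0, beta=0.5, eta=1e-4):
    # Generic backtracking: shrink the step size until an Armijo-type
    # sufficient decrease condition holds along the given descent direction.
    fx = f(x)
    slope = np.dot(grad_f(x), direction)  # directional derivative; negative for a descent direction
    alpha = alpha0
    while f(x + alpha * direction) > fx + eta * alpha * slope:
        alpha *= beta
    return x + alpha * direction, alpha

# Illustrative usage on a simple nonconvex function with minimizers near (+1, 0) and (-1, 0).
f = lambda x: x[0] ** 4 - 2.0 * x[0] ** 2 + x[1] ** 2
grad_f = lambda x: np.array([4.0 * x[0] ** 3 - 4.0 * x[0], 2.0 * x[1]])
x = np.array([0.5, 1.0])
for _ in range(20):
    x, _ = backtracking_step(f, grad_f, x, -grad_f(x))

A practical implementation would also cap the number of backtracking steps and guard against directions that are not descent directions.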
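The abstract also mentions computing search directions inexactly with iterative linear algebra. Below is a minimal sketch of that idea under the simplifying assumption of a positive definite Hessian, using the conjugate gradient solver from SciPy and accessing the Hessian only through matrix-vector products; the function name, the fallback rule, and the example data are hypothetical and do not reproduce the conjugate gradient and Lanczos safeguards analyzed in the paper.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def inexact_newton_direction(grad, hess_vec, n, max_cg_iters=50):
    # Approximately solve H d = -g by conjugate gradients, using only
    # Hessian-vector products (no explicit Hessian matrix is formed).
    H = LinearOperator((n, n), matvec=hess_vec, dtype=float)
    d, info = cg(H, -grad, maxiter=max_cg_iters)
    if info != 0:
        return -grad  # crude fallback to steepest descent if CG did not converge
    return d

# Example on a small convex quadratic 0.5 * x^T A x + b^T x (hypothetical data).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x0 = np.zeros(2)
d = inexact_newton_direction(A @ x0 + b, lambda v: A @ v, n=2)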