An algorithm for solving smooth nonconvex optimization problems is proposed that, in the worst case, takes O(ε^{-3/2}) iterations to drive the norm of the gradient of the objective function below a prescribed positive real number ε and can take O(ε^{-3}) iterations to drive the leftmost eigenvalue of the Hessian of the objective above −ε. The proposed algorithm is a general framework that covers a wide range of techniques, including quadratically and cubically regularized Newton methods such as the Adaptive Regularisation using Cubics (ARC) method and the recently proposed Trust-Region Algorithm with Contractions and Expansions (TRACE). The generality of our method is achieved through the introduction of generic conditions that each trial step is required to satisfy, which in particular allow for inexact regularized Newton steps to be used. These conditions center around a new subproblem that can be approximately solved to obtain trial steps satisfying the conditions. A new instance of the framework, distinct from ARC and TRACE, is described that may be viewed as a hybrid between quadratically and cubically regularized Newton methods. Numerical results demonstrate that our hybrid algorithm outperforms a cubically regularized Newton method.

Second, they do not consider second-order convergence or complexity properties, although they might be able to do so by incorporating second-order conditions similar to ours. Third, they focus on strategies for identifying an appropriate value for the regularization parameter. An implementation of our method might consider their proposals, but could employ other strategies as well. In any case, overall, we believe that our papers are quite distinct, and in some ways complementary.

ORGANIZATION In §2, we present our general framework, which is formally stated as Algorithm 1.
In §3, we prove that our framework enjoys first-order convergence (see §3.1), an optimal first-order complexity (see §3.2), and certain second-order convergence and complexity guarantees (see §3.3). In §4, we show that ARC and TRACE can be viewed as special cases of our framework, and present yet another instance that is distinct from these methods. In §5, we present details of implementations of a cubic regularization method and our newly proposed instance of our framework, and provide the results of numerical experiments with both. Finally, in §6, we present final comments.

NOTATION We use R_+ to denote the set of nonnegative scalars, R_{++} to denote the set of positive scalars, and N_+ to denote the set of nonnegative integers. Given a real symmetric matrix A, we write A ⪰ 0 (respectively, A ≻ 0) to indicate that A is positive semidefinite (respectively, positive definite). Given a pair of scalars (a, b) ∈ R × R, we write a ⊥ b to indicate that ab = 0. Similarly, given such a pair, we denote their maximum as max{a, b} and their minimum as min{a, b}. Given a vector v, we denote its (Euclidean) ℓ2-norm as ‖v‖. Finally, given a discrete set S, we denote its cardinality by |S|.
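To give a concrete sense of the cubically regularized Newton steps discussed above, the following is a minimal one-dimensional sketch, not the paper's algorithm: it fixes the regularization weight σ (rather than adapting it as ARC or our framework would) and exploits the fact that, for a scalar model m(s) = g s + (h/2) s² + (σ/3)|s|³, the minimizer can be written in closed form. The test function f(x) = x⁴/4 − x²/2 and all parameter values are illustrative choices.

```python
import math

def cubic_reg_step(g, h, sigma):
    """Minimize the scalar cubic model m(s) = g*s + 0.5*h*s**2 + (sigma/3)*abs(s)**3.

    Stationarity gives g + h*s + sigma*s*abs(s) = 0; writing t = |s| with
    s = -sign(g)*t reduces this to sigma*t**2 + h*t - |g| = 0, whose
    nonnegative root exists even when h < 0 (the nonconvex case).
    """
    if g == 0.0:
        return 0.0
    t = (-h + math.sqrt(h * h + 4.0 * sigma * abs(g))) / (2.0 * sigma)
    return -math.copysign(t, g)

# Nonconvex test function f(x) = x**4/4 - x**2/2, with minimizers at x = ±1
# and a local maximizer at x = 0 (where the Hessian is negative).
grad = lambda x: x**3 - x
hess = lambda x: 3.0 * x**2 - 1.0

x, sigma = 2.0, 1.0  # fixed regularization weight: a simplification for illustration
for _ in range(50):
    x += cubic_reg_step(grad(x), hess(x), sigma)
    if abs(grad(x)) < 1e-10:
        break  # first-order stationarity tolerance reached

print(x)  # converges to the nearby minimizer x = 1
```

In higher dimensions the subproblem has no such closed form, which is precisely why conditions permitting inexact, approximately solved subproblems (as in the framework above) matter in practice.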