We establish or refute the optimality of inexact second-order methods for unconstrained nonconvex optimization from the point of view of worst-case evaluation complexity, improving and generalizing the results of [15,19]. To this aim, we consider a new general class of inexact second-order algorithms for unconstrained optimization that includes regularization and trust-region variations of Newton's method, as well as their linesearch variants. For each method in this class and any accuracy threshold ε ∈ (0, 1), we exhibit a smooth objective function with bounded range, whose gradient is globally Lipschitz continuous and whose Hessian is α-Hölder continuous (for given α ∈ [0, 1]), for which the method in question takes at least ⌊ε^{−(2+α)/(1+α)}⌋ function evaluations to generate a first iterate whose gradient is smaller than ε in norm. Moreover, we construct another function on which Newton's method takes ⌊ε^{−2}⌋ evaluations, even though its Hessian is Lipschitz continuous on the path of iterates. These examples provide lower bounds on the worst-case evaluation complexity of methods in our class when applied to smooth problems satisfying the relevant assumptions. Furthermore, for α = 1, this lower bound is of the same order in ε as the upper bound on the worst-case evaluation complexity of the cubic regularization method and of the other methods in the classes proposed in [36] or [65], implying that these methods have optimal worst-case evaluation complexity within a wider class of second-order methods, and that Newton's method is suboptimal.
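As a minimal sketch of how the exponent behaves at the endpoints of the Hölder range (using only the quantities stated above): for α = 1 one has (2+α)/(1+α) = 3/2, so the lower bound reads ⌊ε^{−3/2}⌋, which is the order of the known O(ε^{−3/2}) upper bound for cubic regularization; for α = 0 one has (2+α)/(1+α) = 2, giving ⌊ε^{−2}⌋, the same order as the Newton example with Lipschitz continuous Hessian on the path of iterates.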