We give an asymptotic expansion of the maximum likelihood estimator (MLE), or of any other implicitly defined estimator, in terms of the limiting behavior of the score and its higher-order derivatives. This expansion is explicitly computable and gives insight into the non-asymptotic behavior of the renormalized MLE and its departure from its limit. We emphasize that the results hold whenever the score and its derivatives converge, including toward non-Gaussian limits.
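As a schematic illustration only (a minimal one-dimensional sketch, with a scalar parameter $\theta$, log-likelihood $\ell_n$, and dots denoting derivatives in $\theta$; the notation is assumed here and is not taken from the paper), the leading term of such an expansion around the true value $\theta_0$ follows from Taylor-expanding the score equation $\dot\ell_n(\hat\theta_n)=0$:
\[
\hat\theta_n - \theta_0 \;=\; -\,\frac{\dot\ell_n(\theta_0)}{\ddot\ell_n(\theta_0)} \;+\; \text{remainder involving } \dddot\ell_n(\theta_0),\ldots,
\]
so that the limiting law of the renormalized error is governed by the joint limiting behavior of $\dot\ell_n$, $\ddot\ell_n$ and the higher-order derivatives, whether that limit is Gaussian or not.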