The coefficient of determination, known as R², is commonly used as a goodness-of-fit criterion when fitting linear models. R² is somewhat controversial when fitting nonlinear models, although it may be generalised on a case-by-case basis to deal with specific models such as the logistic model. Assume we are fitting a parametric distribution to a data set using, say, maximum likelihood estimation. A general approach to measuring the goodness-of-fit of the fitted parameters, which is advocated herein, is to use a nonparametric measure to compare the empirical distribution, comprising the raw data, with the fitted model. In particular, for this purpose we put forward the survival Jensen-Shannon divergence (SJS) and its empirical counterpart (ESJS), a bounded metric that is a natural generalisation of the Jensen-Shannon divergence. We demonstrate, via a straightforward procedure making use of the ESJS, that it can be used as part of maximum likelihood estimation or curve fitting as a measure of goodness-of-fit, including the construction of a confidence interval for the fitted parametric distribution. Furthermore, we show the validity of the proposed method with simulated data and three empirical data sets.

Proposals have been made to generalise R² for nonlinear models; in [5] the author recommends defining R² as a comparison of a given model to the null model, claiming that this view allows for the generalisation of R². Further, in [6] the inappropriateness of R² for nonlinear models is clearly demonstrated via a series of Monte Carlo simulations. In [7], a novel R² measure based on the Kullback-Leibler divergence [8] was proposed as a measure of goodness-of-fit for regression models in the exponential family. In addition, in [9] problems with using R² for assessing goodness-of-fit in linear mixed models with random effects were highlighted, and in [10] an improved extension was proposed in the context of both linear and generalised linear mixed models.
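As a concrete illustration of the approach advocated above, the following sketch computes an empirical SJS-style divergence between the empirical survival function of a sample and the survival function of a distribution fitted by maximum likelihood. The function name `esjs`, the trapezoidal integration over the sorted sample, and the exponential example are illustrative assumptions; the exact definition and normalisation of the ESJS follow the paper.

```python
import numpy as np

def esjs(data, fitted_sf):
    """Sketch of an empirical survival Jensen-Shannon divergence.

    Assumes the SJS replaces the densities in the Jensen-Shannon
    divergence with survival functions, integrated over the support;
    the paper's exact normalisation may differ in detail.
    """
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    # Empirical survival function evaluated just after each data point.
    emp_sf = 1.0 - np.arange(1, n + 1) / n
    fit_sf = fitted_sf(x)
    m = 0.5 * (emp_sf + fit_sf)  # mixture survival function
    with np.errstate(divide="ignore", invalid="ignore"):
        # Guard the 0 * log(0) cases, which are defined as 0.
        t1 = np.where(emp_sf > 0, emp_sf * np.log(emp_sf / m), 0.0)
        t2 = np.where(fit_sf > 0, fit_sf * np.log(fit_sf / m), 0.0)
    y = 0.5 * (t1 + t2)
    # Trapezoidal rule over the sample grid.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Example: fit an exponential by maximum likelihood (rate = 1/mean)
# and measure how well the fitted distribution matches the sample.
rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=1000)
rate = 1.0 / sample.mean()  # MLE of the exponential rate
divergence = esjs(sample, lambda t: np.exp(-rate * t))
```

By the log-sum inequality the integrand is pointwise nonnegative, so the divergence is nonnegative, and it shrinks towards zero as the fitted survival function approaches the empirical one, which is what makes it usable as a goodness-of-fit measure.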
Despite numerous proposals to address the issues with R² for nonlinear models, many of them are ad hoc and, as noted in [11], should be applied with caution. In summary, there appears to be a lack of general-purpose goodness-of-fit measures applicable to nonlinear models, which is the main issue we attempt to redress with the ESJS.

Alternative nonparametric methods have also been proposed. In particular, the Akaike information criterion (AIC) and its counterpart, the Bayesian information criterion (BIC) [12,13], are widely used estimators for model selection. Both AIC and BIC are asymptotically valid maximum likelihood estimators, with penalty terms to discourage overfitting. We stress that goodness-of-fit measures how well a single model fits the observed data, while model selection compares the predictive accuracy of two models relative to each other [12,14]. The likelihood ratio test is also an established method for model selection between a null model and an alternative maximum likelihood model [15,16]. Despite the popularity of maximum likelihood meth...