The growth in the publication of clinical prediction models (CPMs) has been exponential, largely as a result of an ever-increasing availability of clinical data, inexpensive computational power, and an expanding tool kit for constructing predictive algorithms. Such an abundance of CPMs has led to an overcrowded, confusing landscape in which it is difficult to identify and select the best, most useful models.1 Few models are externally validated by the same researchers who developed them, and even fewer by independent investigators. Only 592 (43.3%) of 1366 cardiovascular CPMs in the Tufts PACE Clinical Predictive Model Registry reported at least 1 validation.2 The proportions of models in the Tufts registry that reported at least 2, 3, and 10 validations were 20.1%, 12.8%, and 2.9%, respectively.2 A few select CPMs, such as the Framingham Risk Score and EuroSCORE, have had numerous validations. However, even these models are subject to modifications (eg, adding or removing a predictor variable), with the resulting modified model not revalidated externally. Fragmented efforts that assess only one model at a time do not allow for reliable ranking of the comparative performance of the many CPMs available for the same clinical application. A small number of