Background-Binary angiographic and clinical restenosis rates can vary widely between clinical studies, even for the same stent, because they are influenced heavily by case-mix covariates that differ among the observational and randomized trials intended to assess a given stent system. We hypothesized that mean in-stent late loss might be a more stable estimator of restenosis propensity across such studies.

Methods and Results-In 46 trials of drug-eluting and bare-metal stenting, increasing mean late loss was associated with higher target lesion revascularization (TLR) rates (P<0.001). When the class of bare-metal stents was compared with the class of effective drug-eluting stents, late loss was more discriminating than TLR, as measured by the high intraclass correlation coefficient (ρ) (late loss, ρ=0.71 versus TLR, ρ=0.22; 95% CI of difference=0.33, 0.65). When the individual drug-eluting stents and bare-metal stents were compared, late loss was again the better discriminator (ρ=0.68 versus 0.19; 95% CI of difference=0.24, 0.60). Stabilizing assessments of TLR requires greater adjustment for study covariates than does late loss, because reference vessel diameter influences TLR more strongly than it influences in-stent late loss. Standardizing late loss with a novel method that adjusts for diabetes prevalence and mean lesion length produced only minor changes (<0.08 mm for 90% of reported trials) and yielded an ordered array of mean late loss values for the stent systems studied.

Conclusions-Late loss is more reliable than restenosis rates for discriminating restenosis propensity between new drug-eluting stent platforms across studies and might be the optimum end point for evaluating drug-eluting stents in early, nonrandomized studies.
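For reference, the two quantities compared above have standard forms that the abstract does not spell out. The sketch below assumes the conventional angiographic definition of in-stent late loss (post-procedure minus follow-up minimal lumen diameter, MLD) and a one-way random-effects intraclass correlation; the exact estimator used in the study may differ.

\[
\text{late loss} \;=\; \mathrm{MLD}_{\text{post-procedure}} \;-\; \mathrm{MLD}_{\text{follow-up}}
\]

\[
\rho \;=\; \frac{\sigma^{2}_{\text{between}}}{\sigma^{2}_{\text{between}} + \sigma^{2}_{\text{within}}}
\]

Under this form, ρ near 1 indicates that most of the variability in an end point lies between stent classes rather than between studies of the same class, so an end point with higher ρ (here, late loss at 0.71 versus TLR at 0.22) discriminates stent performance more reliably across heterogeneous trials.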