To paraphrase Yogi Berra and perhaps others,1 prediction is hard, especially about the future. Chaudhary and colleagues2 therefore should be commended for producing a moderately accurate prediction model of sustained postoperative opioid use. Consistent with current standards,3 they transparently reported the model development process and coefficients. They also translated the model into a practical and accessible scoring system, the Stopping Opioids After Surgery (SOS) score, making it more likely to be used for discharge planning. Others who develop prediction models should emulate these features.

Clinical prediction models like the SOS score2 are intended to inform treatment decisions for individual patients. Therefore, the same rules of evidence and skepticism should apply as for all health care interventions. Because of the complex and technical nature of prediction model development and evaluation, being a critical user of these models is often more difficult than producing the models in the first place. Overall accuracy statistics are important, but they are only a small fraction of what the critical user should consider. In this short commentary, I offer only 3 of the many important questions critical users should ask before using or implementing the SOS score or any other clinical prediction model.