The debate over human visual perception and how medical images should be interpreted has persisted since X-rays were the only imaging technique available. Concerns over rates of disagreement between expert image readers pervade clinical research and are at times driven by the belief that any variability in image-based endpoints is problematic. A deeper understanding of the reasons for, value of, and risks of disagreement remains siloed, leading at times to costly and risky approaches, especially in clinical trials. Although artificial intelligence promises some relief from errors, its routine application for assessing tumors within cancer trials is still an aspiration. Our consortium of international experts in medical imaging for drug development research, the Pharma Imaging Network for Therapeutics and Diagnostics (PINTAD), drew on the collective knowledge of its members to ground expectations, summarize common reasons for reader discordance, identify which factors can be controlled, and determine which actions are likely to be effective in reducing discordance. Reinforced by an exhaustive literature review, our work defines the forces that shape reader variability. This review article aims to provide a single authoritative resource outlining the practical realities of reader performance in cancer trials, whether reads occur at a clinical site or within an independent central review.