Transparency in Machine Learning (ML), including interpretable or explainable ML, attempts to reveal the working mechanisms of complex models such as deep neural networks. Transparent ML promises to advance the human factors engineering goals of human-centered AI, such as increasing trust or reducing automation bias, among target users. However, a core requirement for achieving these aims is that the method is indeed transparent to those users. From a human-centered design perspective, transparency is not a property of the ML model but an affordance, i.e., a relationship between algorithm and user; as a result, iterative prototyping and evaluation with users is critical to attaining solutions that truly afford transparency. However, following human-centered design principles in highly specialized and high-stakes domains, such as healthcare and medical image analysis, is challenging due to the limited availability of and access to end users, such as domain experts, providers, or patients. This dilemma is further exacerbated by the large knowledge imbalance between ML designers and these end users, which implies an even greater need for iterative development.

To investigate the state of transparent ML in medical image analysis, we conducted and now present a systematic review of the literature. Our review reveals multiple severe shortcomings in the design and validation of transparent ML for medical image analysis applications. We find that most studies to date approach transparency as a property of the model itself, similar to task performance, without considering end users during either development or evaluation. Despite the considerable difference between the roles and knowledge of ML developers and clinical stakeholders, no study reported formative user research to inform the design and development of transparent ML models; moreover, only a few studies validated transparency claims through empirical user evaluations. The failure to treat ML transparency as a relationship with end users, the lack of user research, and the sporadic validation of transparency claims put contemporary research on transparent ML for medical image analysis at risk of being incomprehensible to users and, thus, clinically irrelevant.

To alleviate these shortcomings in forthcoming research while acknowledging the challenges of human-centered design in healthcare, we introduce the INTRPRT guideline, a systematic design directive for transparent ML systems in medical image analysis. To bridge the disconnect between ML system designers and end users, and to avoid wasting costly development effort on models that are ultimately found to be non-transparent to end users, the INTRPRT guideline suggests formative user research as the first step of transparent model design to understand user needs and domain requirements. Following this process produces evidence to support design choices and, ultimately, transparency claims.