Bringing AI technology into clinical practice has proved challenging for system designers and medical professionals alike. The academic literature has, for example, highlighted the dangers of black-box decision-making and biased datasets. Moreover, end-users' ability to validate a system's performance often disappears once AI decision-making is introduced. We present the MAP model to understand and describe the three stages through which medical observations are interpreted and handled by AI systems. These stages are Measurement, in which information is gathered and converted into data points that can be stored and processed; Algorithm, in which computational processes transform the collected data; and Presentation, in which information is returned to the user for interpretation. For each stage, we highlight challenges that must be addressed to develop Human-Centred AI systems. We illustrate the MAP model through complementary case studies on colonoscopy practice and dementia diagnosis, providing examples of the challenges encountered in real-world settings. By defining Human-AI interaction across these three stages, we untangle some of the inherent complexities of designing AI technology for clinical decision-making, and aim to reduce misalignment between medical end-users and AI researchers and developers.