Developing AI systems for healthcare is a complex undertaking with many wide-ranging sociotechnical challenges [3,9,46,60,150], spanning: (i) concerns about patient autonomy, including the ability to explicitly consent to or withdraw from uses of healthcare data, and the protection of data privacy in AI development and use [123,134]; (ii) investigations into AI workflow integration [9,21,27] and how best to configure clinician-AI relationships to effectively empower care providers [50,54,125,141,147]; as well as (iii) challenges around acceptance, trust, and adoption of AI insights in clinical practice [52,60,86,114,139]. The last of these is mostly addressed in the field of eXplainable AI (XAI) through research into AI transparency via explanations and other mechanisms that help clinicians contest [53] or learn about AI outputs [24], so that they can develop an appropriate mental model of AI capabilities and their limitations.