Our concept of causability is a measure of whether and to what extent humans can understand a given machine explanation. We motivate causability with a clinical case from cancer research. We argue for using causability in medical artificial intelligence (AI) to develop and evaluate future human-AI interfaces.

Achieving human-level artificial intelligence (AI) has been an ambition since the emergence of the field. Thanks to the availability of big data and the necessary computing power, statistical machine learning, especially deep learning, has made tremendous progress, even in domains as complex as medicine. For example, the work of the Stanford machine learning group on dermatology [1] was popularized in Europe as "AI is better than doctors." The group trained a deep learning model directly from dermatological images, using only pixels and disease labels as inputs for the classification of skin lesions. For pretraining, they used 1.3 million images from the 2014 ImageNet challenge, followed by 130,000 clinical images covering about 2,000 different diseases. The results, with an average classification performance of 92%, were on par with, or even better than, those of human dermatologists. This is a remarkable achievement, and there is no question that AI will become very important for medicine.

Despite these impressive successes, we must be aware that these approaches rely on statistical, model-free learning. Relying solely on statistical correlations can be very dangerous, especially in medicine, because correlation must not be confused with causality, which is completely missing in current AI. This is a general