Despite the great promise that artificial intelligence (AI) holds for health care, the uptake of such technologies into medical practice is slow. In this paper, we focus on the epistemological issues arising from the development and implementation of a class of AI for clinical practice, namely clinical decision support systems (CDSS). We first provide an overview of the epistemic tasks of medical professionals, and then analyse which of these tasks can be supported by CDSS, while also explaining why some of them should remain the territory of human experts. Clinical decision making involves a reasoning process in which clinicians combine different types of information into a coherent and adequate 'picture of the patient' that enables them to draw explainable and justifiable conclusions for which they bear epistemic responsibility. We therefore suggest that it is more appropriate to think of CDSS as clinical reasoning support systems (CRSS). Developing CRSSs that support clinicians' reasoning process requires that: (a) the system is developed on the basis of relevant and well-processed data; and (b) the system facilitates interaction with the clinician. This, in turn, demands close collaboration between medical experts and the AI experts developing the CRSS. In addition, responsible use of a CRSS requires that its output is justified through an empirical link with the individual patient. In practice, this means that the system indicates which factors contributed to its advice, allowing the clinician to evaluate whether these factors are medically plausible and applicable to the patient. Finally, we argue that proper implementation of CRSSs allows human and artificial intelligence to be combined into hybrid intelligence, in which both perform clearly delineated and complementary epistemic tasks: whereas CRSSs can assist with statistical reasoning and finding patterns in complex data, it is the clinicians' task to interpret, integrate and contextualize.