In recent years, a plethora of high-profile scientific publications has reported machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has sparked interest in deploying such algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical diagnosis, it comes at the expense of opacity when trying to assess the reliability of a given diagnosis. Drawing on literature in social epistemology and moral responsibility, we argue that the uncertainty in question potentially undermines the epistemic authority of clinicians. Furthermore, we elucidate potential pitfalls of involving machine learning in healthcare with respect to paternalism, moral responsibility, and fairness. Finally, we discuss how the deployment of machine learning algorithms might shift the evidentiary norms of medical diagnosis. In this regard, we hope to lay the grounds for further ethical reflection on the opportunities and pitfalls of machine learning for enhancing decision-making in healthcare.
The neurokinin-1-receptor antagonist L-754,030 prevents delayed emesis after treatment with cisplatin. Moreover, combining L-754,030 with granisetron plus dexamethasone improves the prevention of acute emesis.
A single IV dose of dolasetron mesylate (1.8 or 2.4 mg/kg) has comparable safety and efficacy to a single 32-mg IV dose of ondansetron in patients receiving cisplatin chemotherapy.
Anesthetics, and even minimal residual neuromuscular blockade, may lead to upper airway obstruction (UAO). In this study, we used spirometry to assess, in patients with a train-of-four (TOF) ratio >0.9, the incidence of UAO (defined as a ratio of maximal expiratory flow to maximal inspiratory flow at 50% of vital capacity [MEF50/MIF50] >1) and determined whether UAO is induced by neuromuscular blockade (defined by a forced vital capacity [FVC] fade, i.e., a decrease in FVC of ≥10% from the first to the second consecutive spirometric maneuver). Patients received propofol and opioids for anesthesia. Spirometry consisted of a series of three repeated maneuvers: the first before induction (under midazolam premedication), the second after tracheal extubation (TOF ratio ≥0.9), and the third 30 min later. Immediately after tracheal extubation and 30 min later, 48 and 6 of 130 patients, respectively, were unable to perform spirometry appropriately because of sedation. The incidence of UAO increased significantly (P < 0.01) from 82 of 130 patients (63%) at preinduction baseline to 70 of 82 patients (85%) after extubation, and subsequently decreased within 30 min to values observed at baseline (80 of 124 patients, 65%). The mean MEF50/MIF50 ratio after tracheal extubation was significantly increased from baseline (by 20%; 1.39 ± 1.01 versus 1.73 ± 1.02; P < 0.01) and subsequently decreased significantly to baseline values (1.49 ± 0.93). A statistically significant FVC fade was not present, and an FVC fade of ≥10% was observed in only 2 patients after extubation. Thus, recovery of the TOF ratio to 0.9 predicts with high probability the absence of neuromuscular blocking drug-induced UAO, but outliers, i.e., persistent effects of neuromuscular blockade on upper airway integrity despite recovery of the TOF ratio, may still occur.
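To make the two numeric criteria in this abstract concrete, here is a minimal sketch in Python; the function names are hypothetical and the input values are illustrative, not data from the study. It classifies a spirometry result as UAO when MEF50/MIF50 exceeds 1 and flags FVC fade when FVC drops by at least 10% between two consecutive maneuvers:

```python
def has_uao(mef50: float, mif50: float) -> bool:
    """UAO criterion from the study: MEF50/MIF50 > 1."""
    return mef50 / mif50 > 1.0


def has_fvc_fade(fvc_first: float, fvc_second: float, threshold: float = 0.10) -> bool:
    """FVC fade: FVC decreases by >= 10% from the first to the second maneuver."""
    return (fvc_first - fvc_second) / fvc_first >= threshold


# Illustrative values (flows in L/s, volumes in L), not study data.
print(has_uao(mef50=3.1, mif50=2.4))                # True: ratio ~1.29 > 1
print(has_fvc_fade(fvc_first=4.0, fvc_second=3.5))  # True: 12.5% decrease
```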
For some years, we have been witnessing a steady stream of high‐profile studies about machine learning (ML) algorithms achieving high diagnostic accuracy in the analysis of medical images. That said, facilitating successful collaboration between ML algorithms and clinicians proves to be a recalcitrant problem that may exacerbate ethical problems in clinical medicine. In this paper, we consider different epistemic and normative factors that may lead to algorithmic overreliance within clinical decision‐making. These factors are false expectations, the miscalibration of uncertainties, non‐explainability, and the socio‐technical context within which the algorithms are utilized. Moreover, we identify different desiderata for bridging the gap between ML algorithms and clinicians. Further, we argue that there is an intriguing dialectic in the collaboration between clinicians and ML algorithms. While it is the algorithm that is supposed to assist the clinician in diagnostic tasks, successful collaboration will also depend on adjustments on the side of the clinician.