2023
DOI: 10.3389/fmed.2023.1305756

Defining medical liability when artificial intelligence is applied on diagnostic algorithms: a systematic review

Clara Cestonaro, Arianna Delicati, Beatrice Marcante, et al.

Abstract: Artificial intelligence (AI) in medicine is an increasingly studied and widespread phenomenon, applied in multiple clinical settings. Alongside its many potential advantages, such as easing clinicians’ workload and improving diagnostic accuracy, the use of AI raises ethical and legal concerns, to which there is still no unanimous response. A systematic literature review on medical professional liability related to the use of AI-based diagnostic algorithms was conducted using the public electronic database PubMed…

Cited by 25 publications (7 citation statements)
References 41 publications
“…Of course, if a patient takes part in a research protocol evaluating sensors or AI, ethical guidelines must be followed, and informed consent might need to be signed by the patients after ethical approval of the study by an ad hoc committee. It should be highlighted that in order to obtain an informed consent in the context of sensors and AI, patients should be sufficiently informed to “understand risk, benefits and limitations of sensors and AI software” to be able to give consent to their use [47].…”
Section: Discussion (mentioning)
confidence: 99%
“…For example, when it is used only as a decision support, the radiologist who makes the final determination would be the one bearing the liability risk. However, when the sensor/AI algorithm acts autonomously, it could be “considered analogous to an employee of a facility, its negligence could be attributed to its supervising radiologist or to the institution” [47]. Of course, similarly to the example provided just before in telemedicine, a radiologist would be held liable if he/she had the chance to review the report and to detect errors and thus, the patient’s injury might have been prevented.…”
Section: Discussion (mentioning)
confidence: 99%
“…Upholding ethical principles of transparency, accountability, and patient-centered care is essential. This ensures that the integration of AI in ophthalmology respects and protects patient privacy rights while harnessing the potential of data-driven innovations to improve clinical outcomes and quality of care [37, 38]. Through proactive measures and adherence to ethical guidelines, healthcare professionals can navigate the complex landscape of AI-driven healthcare while prioritizing patient safety and well-being.…”
Section: Review (mentioning)
confidence: 99%
“…Furthermore, other papers have focused on the application of AI for the definition of the post-mortem interval (PMI) [37]. Finally, in medico-legal sciences, AI could play a pivotal role in the management of medical liability [38, 39].…”
Section: Introduction (mentioning)
confidence: 99%