Summary

Artificial intelligence (AI) techniques are increasingly used in patient care, for example to provide diagnoses from radiological imaging, to improve workflow by triaging patients, or to offer an expert opinion based on clinical symptoms. However, such techniques also carry intrinsic risks: AI algorithms may point in the wrong direction and often constitute a black box that does not explain the reasoning behind its decisions. This article outlines a case in which an erroneous ChatGPT diagnosis, relied upon by the patient to evaluate symptoms, led to a significant treatment delay and a potentially life-threatening situation. With this case, we would like to highlight the typical risks posed by the widespread use of AI tools that are not intended for medical decision-making.