Artificial Intelligence (AI) techniques are widely used in the medical field for
various applications, including disease diagnosis, disease prediction and classification,
drug discovery, etc. However, these techniques lack transparency in the predictions
or decisions they make, owing to their black-box operation. Explainable AI (XAI)
addresses this limitation, enabling physicians to interpret model outputs and make better-informed decisions.
This article explores XAI techniques in the field of healthcare applications, including the Internet
of Medical Things (IoMT). XAI aims to provide transparency, accountability, and traceability
in AI-based systems in healthcare applications. It can help in interpreting the predictions
or decisions made in medical diagnosis systems, medical decision support systems, smart
wearable healthcare devices, etc. Nowadays, XAI methods are utilized in numerous
medical applications over the Internet of Things (IoT), such as medical diagnosis, prognosis,
and explanation of AI models; hence, XAI in the context of IoMT and healthcare has
the potential to enhance the reliability and trustworthiness of AI systems.
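As a minimal illustration of the kind of interpretability XAI aims to provide, the sketch below applies a simple perturbation-based (occlusion) explanation to a black-box risk model. The model, feature names, weights, and patient values are all hypothetical placeholders, not taken from this article; real systems would typically use established tools such as SHAP or LIME.

```python
# Minimal sketch of a perturbation-based XAI technique (feature occlusion).
# The "black-box" model, its weights, and the patient record are hypothetical.
import math

def predict_risk(features):
    """Hypothetical black-box model: logistic score over patient features."""
    weights = {"age": 0.03, "bmi": 0.05, "glucose": 0.08, "bp": 0.02}
    z = sum(weights[k] * v for k, v in features.items()) - 12.0
    return 1.0 / (1.0 + math.exp(-z))  # predicted probability of disease

def occlusion_importance(features, baseline):
    """Explain one prediction by replacing each feature with a baseline
    (e.g., population-average) value and measuring the change in risk."""
    base_pred = predict_risk(features)
    importance = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] = baseline[name]   # "occlude" this feature
        importance[name] = base_pred - predict_risk(perturbed)
    return importance

patient = {"age": 64, "bmi": 31.0, "glucose": 160, "bp": 95}
population_avg = {"age": 45, "bmi": 25.0, "glucose": 100, "bp": 80}
for name, delta in occlusion_importance(patient, population_avg).items():
    print(f"{name:>8}: {delta:+.3f}")
```

Each printed value indicates how much that feature, relative to the baseline, contributed to raising the patient's predicted risk, which is the kind of per-prediction transparency a physician can act on.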