Explainable Artificial Intelligence (XAI) is an area of growing interest, particularly in medical imaging, where example-based techniques show great potential. This paper presents a systematic review of recent example-based XAI techniques, a promising approach that remains relatively unexplored in clinical practice and medical image analysis. Recent studies applying example-based XAI to the interpretation of medical images were selected and analysed. Several approaches were examined, highlighting how each contributes to improving accuracy, transparency, and usability in medical applications. These techniques were compared and discussed in detail, considering their advantages and limitations in the context of medical imaging, with a focus on improving their integration into clinical practice and medical decision-making. The review also identified gaps in current research and suggested directions for future investigation. The need to develop XAI methods that are not only technically efficient but also ethically responsible and adaptable to the needs of healthcare professionals was emphasised. The paper thus seeks to establish a solid foundation for understanding and advancing example-based XAI techniques in medical imaging, promoting a more integrated and patient-centred approach to medicine.