The increasing adoption of disruptive health and biomedical informatics technologies, such as artificial intelligence (AI), has accelerated medical operations, from patient-centered health data management to streamlined clinical procedures, in this generative era. As these technologies are integrated into traditional approaches, they raise critical medical concerns, particularly regarding the transparency and interpretability of AI models. This study conducts a systematic literature review (SLR) of studies drawn from publicly available academic databases using a defined data collection procedure. A total of 1,837 articles published between 2014 and 2024 were retrieved from eight popular academic databases: PubMed, ACM Library, Springer, Scopus, IEEE Xplore, ScienceDirect, Google Scholar, and Web of Science. After a comprehensive screening process, 148 articles were retained based on the relevance of their AI methods to healthcare and biomedicine. The reviewed studies show that most medical practitioners still find it difficult to explain the reasoning behind the decisions AI models make in biomedical settings, leading to limited trust, biased decision-making, and uncertainty about patient data safety. Model-agnostic strategies and explainable AI (XAI) frameworks are examined, together with key datasets used for training and evaluation. The main challenges identified are AI model complexity and regulatory compliance, while future trends emphasize fairness and bias mitigation. Few studies focus on improving AI transparency, trust, and interpretability. The review concludes that a substantial research gap remains in descriptive, explainable AI models for healthcare, particularly in integrating AI into clinical practice while maintaining ethical standards and patient-centric care.