Mistrust, amplified by numerous artificial intelligence (AI) related incidents, has caused the energy and industrial sectors to be amongst the slowest adopters of AI methods. Central to this issue is the black-box nature of AI, which impedes investment and is fast becoming a legal hazard for users. Explainable AI (XAI) is a recent paradigm intended to tackle this challenge. Being the backbone of the industry, the prognostics and health management (PHM) domain has only recently been introduced to XAI. However, many deficiencies, particularly the lack of explanation assessment methods and uncertainty quantification, plague this young field. In this paper, we elaborate a framework for explainable anomaly detection and failure prognostics, employing a Bayesian deep learning model to generate local and global explanations for the PHM tasks. An uncertainty measure of the Bayesian model is used as a marker for anomalies, expanding the scope of the prognostic explanation to include the model's confidence. The global explanation is also used to improve prognostic performance, an aspect neglected in the handful of existing PHM-XAI publications. The quality of the explanations is then examined using the local accuracy and consistency properties. The method is tested on real-world gas turbine anomalies and on failure prediction with synthetic turbofan data. Seven out of eight of the tested anomalies were successfully identified. Additionally, the prognostic outcome showed a 19% improvement in statistical terms and achieved the highest prognostic score amongst the best published results on the topic.
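To make the core idea concrete, the following is a minimal sketch of using a Bayesian deep learning model's predictive uncertainty as an anomaly marker. It is not the paper's implementation: it assumes Monte Carlo dropout as the approximate Bayesian method, a hypothetical `MCDropoutNet` architecture, and an illustrative anomaly threshold.

```python
# Minimal sketch (assumption, not the paper's model): Monte Carlo dropout as an
# approximate Bayesian method, with predictive uncertainty flagging anomalies.
import torch
import torch.nn as nn


class MCDropoutNet(nn.Module):
    """Hypothetical regression network with dropout kept active at inference."""

    def __init__(self, n_features: int, p: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)


def predictive_uncertainty(model, x, n_samples: int = 50):
    """Sample the predictive distribution by running stochastic forward passes."""
    model.train()  # keeps dropout active; assumes no batch-norm layers
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)


model = MCDropoutNet(n_features=14)
x = torch.randn(100, 14)            # stand-in for sensor snapshots
mean, std = predictive_uncertainty(model, x)
anomalies = std.squeeze() > 0.5     # threshold is illustrative, not from the paper
```

The design intuition matches the abstract: regions of the input space where the model is uncertain (high predictive standard deviation) are treated as anomalous, so the explanation of a detection naturally carries the model's confidence alongside it.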