AI-assisted cancer detection systems have proven highly effective, yet their implementation in clinical environments has faced numerous hurdles, largely because of the limited explainability of their underlying mechanisms. Medical practitioners remain skeptical about adopting AI-assisted diagnoses owing to the opacity of their decision-making processes. In this respect, explainable artificial intelligence (XAI) has emerged to provide explanations for model predictions, thereby addressing the computational black-box problem associated with AI systems. In this research, the focal point is the exploration of the Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) approaches, which enable explanations of model predictions. This study used an ensemble model consisting of three convolutional neural networks (CNNs), InceptionV3, InceptionResNetV2, and VGG16, whose individual predictions were combined through an averaging technique. These models were trained on the Kvasir dataset, which consists of pathological findings related to gastrointestinal cancer. Our ensemble model attained an accuracy of 96.89% and an F1-score of 96.877%. Following the training of the ensemble model, we employed SHAP and LIME to analyze images from the three classes, aiming to explain the deterministic features influencing the model's predictions. The results obtained from this analysis demonstrate a positive and encouraging advancement in the exploration of XAI approaches, specifically in the context of gastrointestinal cancer detection within the healthcare domain.
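
As a minimal sketch of the averaging ensemble described above, the three backbones can be wired together in Keras/TensorFlow as follows. The ImageNet-pretrained weights, the 224×224 input size, and the small dense classification head are assumptions for illustration; they are not details specified in this section.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3, InceptionResNetV2, VGG16

NUM_CLASSES = 3              # number of pathological-finding classes used here
IMG_SHAPE = (224, 224, 3)    # assumed input resolution

def build_branch(backbone_fn, name):
    """Wrap a pretrained backbone with a softmax classification head."""
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=IMG_SHAPE, pooling="avg")
    inputs = layers.Input(shape=IMG_SHAPE)
    x = base(inputs)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(inputs, outputs, name=name)

# Each branch would be fine-tuned on the Kvasir images before ensembling.
branches = [
    build_branch(InceptionV3, "inceptionv3"),
    build_branch(InceptionResNetV2, "inceptionresnetv2"),
    build_branch(VGG16, "vgg16"),
]

# Averaging ensemble: the same image is passed to all three branches and
# their softmax outputs are averaged to produce the final prediction.
ensemble_input = layers.Input(shape=IMG_SHAPE)
ensemble_output = layers.Average()([m(ensemble_input) for m in branches])
ensemble = models.Model(ensemble_input, ensemble_output, name="averaging_ensemble")
```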
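
A hedged sketch of how LIME could then be applied to a single image from the trained ensemble is shown below; it assumes the `lime` Python package, an image array `image` already scaled to [0, 1] with shape matching `IMG_SHAPE`, and the `ensemble` model above, none of which are prescribed by the source text.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

explainer = lime_image.LimeImageExplainer()

# Perturb superpixels of the image and fit a local surrogate to the
# ensemble's predictions on the perturbed copies.
explanation = explainer.explain_instance(
    image.astype("double"),
    classifier_fn=lambda batch: ensemble.predict(batch, verbose=0),
    top_labels=NUM_CLASSES,
    num_samples=1000,
)

# Keep the superpixels that most strongly support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=False,
)
overlay = mark_boundaries(img, mask)  # highlighted regions driving the prediction
```

An analogous visualization can be produced with SHAP; the overlays are what allow clinicians to inspect which image regions influenced the ensemble's decision.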