“…Additionally, the interpretability of deep learning models, in general, remains a challenge [37, 38]. Many of these studies do not address model explanation at all [25, 26, 28, 29, 32, 33]; some employ gradient-based [23, 27, 34] or permutation-based [24] methods to generate an importance heatmap as the model’s explanation; and others conduct post-hoc analyses of learned graph features to derive clinical implications from their models [30, 31]. Although these explanations offer some insight into what the models have learned, they may not be clear, intuitive, or meaningful, especially to clinical experts and cancer researchers.…”
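To make the permutation-based importance mentioned above [24] concrete, here is a minimal sketch in Python. The synthetic data and the least-squares linear model standing in for a deep network are both assumptions for illustration; the core idea, shuffling one feature at a time and measuring the resulting increase in error, is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative assumption): y depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# A fitted "model": least-squares linear regression, a stand-in for any
# predictor whose inner workings we treat as a black box.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(X_):
    return X_ @ w

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))

# Permutation importance: shuffle one feature at a time and record how much
# the prediction error increases relative to the unshuffled baseline.
importance = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importance.append(mse(y, predict(X_perm)) - baseline)

print(importance)
```

The score for feature 0 should dominate, feature 1 should show a small positive score, and feature 2 should stay near zero. In the studies cited above, the per-feature scores would instead be rendered as a heatmap over the input.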