“…The least research is in AI Behaviour and Governance, where much effort is needed in the future. The methods and tools to support trustworthiness (explainability and other AI traits) in AI for energy systems include, among others, visual explanation techniques using Gradient-weighted Class Activation Mapping (Grad-CAM) (Ardito et al, 2022), sequence-to-sequence RNN methods for visual explanation of short-term load forecasting (Gürses-Tran et al, 2022), the Scale-Invariant Feature Transform (SIFT) method (Singstock et al, 2021), post hoc interpretability (Allen and Tkatchenko, 2022), SHapley Additive exPlanation (SHAP) (Pinson et al, 2021; Abdel-Razek et al, 2022; Kruse et al, 2022), interpretable Tiny Neural Networks (TNN) (Longmire and Banuti, 2022), model-agnostic methods (Gürses-Tran et al, 2022), the Temporal Fusion Transformer (TFT) method used to enhance interpretability (López Santos et al, 2022), decision tree and Classification and Regression Tree (CART) algorithms for ML explainability (Sun et al, 2021), visual data exploration for the interpretability of fault diagnosis (Landwehr et al, 2022), a partially interpretable method combining long short-term memory (LSTM) and multilayer perceptron (MLP) networks for short-term load forecasting (Xie et al, 2021), and Local Interpretable Model-Agnostic Explanations (LIME) (Tsoka et al, 2022). We expect that many more methods will be developed for XAI in the future.…”
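To make the flavour of these post hoc techniques concrete, the following is a minimal sketch of applying SHAP to a load-forecasting model. The synthetic features (temperature, hour, lagged load), the gradient-boosted model, and all parameter values are illustrative assumptions for the sketch, not the setups used in the cited studies.

```python
# Minimal sketch: post hoc explanation of a load-forecasting model with SHAP.
# All data, feature names, and the model choice below are illustrative
# assumptions, not the configuration of any study cited above.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "temperature": rng.normal(15, 8, n),      # ambient temperature (deg C)
    "hour": rng.integers(0, 24, n),           # hour of day
    "day_of_week": rng.integers(0, 7, n),     # 0 = Monday
    "load_lag_24h": rng.normal(500, 50, n),   # load 24 h earlier (MW)
})
# Synthetic target: load driven by lagged load, a daily cycle, and temperature.
y = (X["load_lag_24h"]
     + 30 * np.sin(2 * np.pi * X["hour"] / 24)
     - 2 * X["temperature"]
     + rng.normal(0, 5, n))

model = GradientBoostingRegressor().fit(X, y)

# TreeExplainer computes SHAP values: each feature's additive contribution
# to an individual prediction relative to the model's expected output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: mean absolute SHAP value per feature (feature importance).
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name:>14s}: {imp:.2f}")
```

A call such as `shap.summary_plot(shap_values, X)` would add the usual beeswarm visualisation of per-sample contributions; LIME, by contrast, explains a single prediction by fitting a local surrogate model around it.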