The recent resurgence of Artificial Intelligence (AI), specifically in applications like healthcare, security and defense, the Internet of Things (IoT), and other areas that have a significant impact on human life, has led to a demand for eXplainable AI (XAI). The production of explanations is argued to be a key aspect of achieving goals like trustworthiness and transparent, rather than opaque, AI. XAI is also of fundamental academic interest with respect to helping us identify weaknesses in the pursuit of making better AI. Herein, I focus on one piece of the AI puzzle: information fusion. Specifically, I propose XAI fusion indices, linguistic summaries (i.e., textual explanations) of these indices, and local explanations for the fuzzy integral. However, a limitation of these indices is that they are tailored to highly educated fusion experts, and it is not clear what a user should do with the resulting explanations. To address this, I extend the introduced indices to actionable explanations, which are demonstrated in the context of two case studies: multi-source fusion and deep learning for remote sensing. This work ultimately shows what XAI for fusion is and how to create actionable insights.