Advances in the collection and storage of data, alongside the modern emphasis on automated decision-making, have led to datasets growing exponentially in size and complexity over the last four decades. Training traditional machine learning models on excessively large data can lead to exorbitant run times, storage demands and general computational bloat, and the trained model may itself be sub-optimal (Kohavi & John 1997). Dimensionality reduction, through either feature selection or feature extraction, is a common means of mitigating these issues. Extraction methods map the existing data to a lower-dimensional space while attempting to preserve the characteristics of the original dataset, whereas selection methods attempt to retain a representative subset of the original variables (a contrast sketched in the example below). Alongside alleviating computational bloat, data reduction provides a parsimonious representation of the dataset, resulting in comparably simpler models which are more intrinsically interpretable. Data reduction techniques are therefore included within Explainable Artificial Intelligence (XAI) (Barredo Arrieta et al. 2020). With the increasing reliance on automated decision-making, the number of publications related to XAI has grown rapidly over the last ten years. A machine learning model should be not only accurate but also transparent, with an interpretable logic behind its predictions. As a result, machine learning models are used both for predictive purposes and for retrospective data exploration and analysis.
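To make the extraction/selection distinction concrete, the following is a minimal sketch contrasting the two strategies; scikit-learn, the synthetic dataset and the specific methods (principal component analysis for extraction, a univariate filter for selection) are illustrative assumptions rather than anything prescribed by this work.

```python
# Illustrative sketch only: scikit-learn, the synthetic data and the choice
# of PCA / SelectKBest are assumptions made for demonstration purposes.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic classification data: 200 samples, 50 features, 5 informative.
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)

# Extraction: PCA maps all 50 features onto 5 new axes (linear
# combinations of the originals) chosen to preserve maximal variance.
X_extracted = PCA(n_components=5).fit_transform(X)

# Selection: SelectKBest retains 5 of the original features unchanged,
# ranked here by an ANOVA F-score against the class labels.
X_selected = SelectKBest(f_classif, k=5).fit_transform(X, y)

print(X_extracted.shape, X_selected.shape)  # (200, 5) (200, 5)
```

Both routes reach the same reduced dimensionality, but only the selected features retain their original meaning, which is one reason selection is attractive from an interpretability standpoint.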