Machine learning techniques are increasingly gaining attention due to their widespread use in various disciplines across academia and industry. Despite their tremendous success, many such techniques suffer from the "black-box" problem, which refers to situations where the data analyst is unable to explain why such techniques arrive at certain decisions. This problem has fuelled interest in Explainable Artificial Intelligence (XAI), which refers to techniques that can easily be interpreted by humans. Unfortunately, many of these techniques are not suitable for tabular data, which is surprising given the importance and widespread use of tabular data in critical applications such as finance, healthcare, and criminal justice. Also surprising is the fact that, despite the vast literature on XAI, there are still no survey articles to date that focus on tabular data. Consequently, although existing survey articles cover a wide range of XAI techniques, researchers working on tabular data must sift through all of these surveys to extract the techniques that are suitable for their analysis. Our article fills this gap by providing a comprehensive and up-to-date survey of the XAI techniques that are relevant to tabular data. Furthermore, we categorize the references covered in our survey, indicating the type of model being explained, the approach used to provide the explanation, and the XAI problem being addressed. Our article is the first to provide researchers with a map that helps them navigate the XAI literature in the context of tabular data.
INDEX TERMS: Black-box models, Explainable Artificial Intelligence, Machine Learning, Model interpretability

This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2021.3116481, IEEE Access

M. Sahakyan et al.: Explainable Artificial Intelligence for Tabular Data: A Survey

• Decomposability, reflecting the degree to which each input, parameter, and calculation can be explained intuitively;
• Algorithmic transparency, reflecting the degree to which the inner workings of the learning algorithm can be understood.

For example, rule-based models [16] are considered transparent since they use a series of if-then rules that can easily be understood without the need for any further explanation. Unlike transparent models, black-box models do not explain their predictions in a way that humans can understand [17]. Examples of black-box models include artificial neural networks [18] and gradient boosting [19]. Although black-box models are hard for humans to interpret, they tend to achieve higher prediction accuracy than their transparent counterparts. This trade-off between accuracy and transparency gives rise to the black-box explanation problem, which involves explaining the rationale behind the decisions made by black-box models. By providing such explanations, one can continue ...
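To make the transparency property concrete, the following is a minimal sketch of a rule-based classifier of the kind described above. The feature names (a credit-scoring setting) and all thresholds are purely illustrative assumptions, not taken from any reference in this survey; the point is only that every prediction can be traced back to a single, human-readable if-then rule, with no further explanation machinery required.

```python
def rule_based_credit_decision(income, debt_ratio, late_payments):
    """Transparent rule-based classifier (illustrative thresholds).

    Returns a (decision, explanation) pair: the explanation is simply
    the if-then rule that fired, so the model explains itself.
    """
    if late_payments > 2:
        return "deny", "rule 1: more than 2 late payments"
    if debt_ratio > 0.4:
        return "deny", "rule 2: debt-to-income ratio above 0.4"
    if income >= 30000:
        return "approve", "rule 3: income >= 30,000 and rules 1-2 passed"
    # Fallback rule: no approval condition was met.
    return "deny", "rule 4: default fallback"

decision, reason = rule_based_credit_decision(
    income=45000, debt_ratio=0.2, late_payments=0
)
print(decision, "-", reason)
```

A black-box model such as a neural network offers no analogous trace: its prediction emerges from thousands of learned weights, which is precisely what the black-box explanation problem, introduced next, aims to address.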