Artificial intelligence (AI) enables machines to learn from experience, adjust to new inputs, and perform human-like tasks. AI is progressing rapidly and is transforming the way businesses operate, from process automation to cognitive augmentation of tasks and intelligent process and data analytics. A key challenge for human users, however, is to understand and appropriately trust the results of AI algorithms and methods. In this paper, to address this challenge, we study and analyze recent work on Explainable Artificial Intelligence (XAI) methods and tools. We introduce a novel XAI process that facilitates producing explainable models while maintaining a high level of learning performance. We present an interactive, evidence-based approach to assist human users in comprehending and trusting the results and output produced by AI-enabled algorithms. We adopt a typical scenario in the banking domain: analyzing customer transactions. We develop a digital dashboard to facilitate interaction with the algorithm results, and we discuss how the proposed XAI method can significantly improve the confidence of data scientists in understanding the results of AI-enabled algorithms.
KEYWORDS: Business Process Analytics; Explainable AI; Machine Learning

1 INTRODUCTION

This section presents an overview of the study and explains our rationale for conducting this research. We describe the problem we are addressing and discuss the impact of the proposed approach on business organizations. Furthermore, we present the project's contribution to research and academia and its response to the demands of the finance industry.
Overview

In the last decade, the world has witnessed tremendous growth in technology, driven by the improved accessibility of data, cloud resources, and the evolution of machine learning (ML) algorithms. Intelligent systems have achieved significant performance with this growth, and the superior performance of these algorithms in various domains has increased the popularity of artificial intelligence (AI). Alongside these achievements, however, the opacity, ambiguity, and inability to explain and interpret the majority of state-of-the-art techniques are considered ethical issues. These flaws impede the acceptance of complex ML models in a variety of fields, such as medicine, banking and finance, security, and education, and have prompted many concerns about the security and safety of ML system users. According to current regulations and policies, these systems must be transparent in order to satisfy the right to explanation. Due to a lack of trust in existing ML-based systems, explainable artificial intelligence (XAI) methods are gaining popularity. Although neither the domain nor the methods are novel, they are attracting renewed attention due to their ability to open the black box.
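To make the black-box notion concrete, the sketch below illustrates one common model-agnostic XAI technique, permutation feature importance, applied to a toy transaction classifier. The synthetic dataset, feature names, and choice of a random forest are our own illustrative assumptions, not the method proposed in this paper; the sketch only uses standard scikit-learn and NumPy calls.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical transaction features: amount, hour of day, merchant risk score.
n = 2000
X = np.column_stack([
    rng.lognormal(3.0, 1.0, n),   # transaction amount
    rng.integers(0, 24, n),       # hour of day
    rng.uniform(0.0, 1.0, n),     # merchant risk score
])
# Synthetic label: flag large transactions at high-risk merchants.
y = ((X[:, 0] > 40) & (X[:, 2] > 0.7)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque ("black-box") model from the end user's point of view.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Model-agnostic explanation: permutation importance measures how much
# shuffling each feature degrades held-out accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, mean, std in zip(["amount", "hour_of_day", "merchant_risk"],
                           result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

A technique of this kind does not expose the model's internal structure; it ranks inputs by their measured effect on predictions, which is one pragmatic way to partially open the black box for a non-expert audience.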