<div>After reviewing the current state of explainable Artificial Intelligence (XAI) capabilities in Artificial Intelligence (AI) systems developed for critical domains such as criminology, engineering, governance, health, law and psychology, this paper proposes a domain-independent Accountable explainable Artificial Intelligence (AXAI) capability framework. The proposed AXAI framework extends the XAI capability so that AI systems can share their decisions and adequately explain the underlying reasoning processes. The aim is to help AI system developers overcome algorithmic biases and system limitations by incorporating domain-independent AXAI capabilities. Existing XAI methods neither separate nor quantify measures of comprehensibility, accuracy and accountability, so incorporating and assessing XAI capabilities remains difficult. Assessment of the AXAI capabilities of two AI systems in this paper demonstrates that the proposed AXAI framework facilitates the separation and measurement of comprehensibility, predictive accuracy and accountability, and allows AI systems to be delineated in a three-dimensional AXAI space. The framework measures comprehensibility as the readiness of a human to apply the acquired knowledge. System accuracy is measured in terms of the ratio of test to training data, the training data size and the observed number of false-positive inferences. Finally, the framework measures accountability in terms of the inspectability of the input cues, the processed data and the output information, for addressing legal and ethical issues.</div>