2020
DOI: 10.6028/nist.ir.8312-draft
Preprint

Four Principles of Explainable Artificial Intelligence

Abstract: All comments will be made public and are subject to release under the Freedom of Information Act (FOIA). Additional information on submitting comments can be found at https://www.nist.gov/topics/artificial-intelligence/ai-foundational-research-explainability. Trademark Information: All trademarks and registered trademarks belong to their respective organizations. Call for Patent Claims: This public review includes a call for information on essential patent claims (claims whose use would be required for complianc…


Cited by 86 publications (52 citation statements); references 56 publications (82 reference statements).
“…In recognition of the growing importance of this topic, NIST published in August 2020 Four principles of XAI (Phillips et al, 2020), which define the following fundamental principles which an AI must honor to be considered an XAI as follows:…”
Section: XAI Taxonomy
confidence: 99%
“…Furthermore, the use of innovative tools such as Layerwise Relevance Propagation indicates that the machine learning-based approach is correctly identifying human-made features and activities (e.g., transportation networks or land-cover conversion) providing confidence in, and interpretability of, the CNN’s predictions. This step is especially important if the ml-HFI is to be used for policy decisions as national and international laws have begun to require explainable AI systems for decision-making 41,42 .…”
Section: Discussion
confidence: 99%
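The Layerwise Relevance Propagation technique mentioned in the statement above redistributes a network's output score backwards through its layers, so that each input feature receives a share of the "relevance" proportional to its contribution. As a minimal illustrative sketch (not the cited paper's implementation), the LRP-epsilon rule for a single dense layer can be written as follows; the function name `lrp_epsilon` and the toy dimensions are assumptions for the example:

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Propagate relevance R_out at a dense layer's output back to its
    input activations a, using the LRP-epsilon rule.

    a:     input activations, shape (n_in,)
    W:     weight matrix, shape (n_in, n_out)
    b:     bias vector, shape (n_out,)
    R_out: relevance assigned to the layer's outputs, shape (n_out,)
    """
    z = a @ W + b
    z = z + eps * np.sign(z)      # epsilon term stabilizes small denominators
    s = R_out / z                 # per-output relevance ratio
    return a * (s @ W.T)          # redistribute relevance onto the inputs

# Toy example: 3 input features, 2 outputs, all relevance on output 0.
rng = np.random.default_rng(0)
a = rng.random(3)
W = rng.random((3, 2))
b = np.zeros(2)
R_out = np.array([1.0, 0.0])
R_in = lrp_epsilon(a, W, b, R_out)
```

With zero bias and a tiny epsilon, the rule approximately conserves relevance: the entries of `R_in` sum to (almost) the same total as `R_out`, which is the property that makes the resulting per-feature scores interpretable as contributions to the prediction.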
“…2. Trustworthiness: According to NIST [Phi+20], the trustworthiness of an AI application is ultimately derived by its explainability. We attribute greater trust to AI algorithms that are relevant, easy to understand, and not prone to misrepresentation.…”
Section: Purpose
confidence: 99%