2022
DOI: 10.1109/tai.2021.3133846

Recent Advances in Trustworthy Explainable Artificial Intelligence: Status, Challenges, and Perspectives

Abstract: Artificial intelligence (AI) and Machine Learning (ML) have come a long way from the earlier days of conceptual theories, to being an integral part of today's technological society. Rapid growth of AI/ML and their penetration within a plethora of civilian and military applications, while successful, has also opened new challenges and obstacles. With almost no human involvement required for some of the new decision-making AI/ML systems, there is now a pressing need to gain better insights into how these decisio…


Cited by 106 publications (48 citation statements) · References 125 publications
“…For example, to justify outcomes, or to identify what factors play a role, there may be no need to explain the exact inner workings of the system. Moreover, distinctions have been made regarding the method of explanation (Arrieta et al, 2020; Rawal et al, 2022). Explanations can be textual or visual; can focus on an example, a counterfactual, or a simplification (e.g., using a simplified model); can provide feature relevance, knowledge rules, or full system descriptions (global explanations); or can provide reasoning for a specific decision made by the system (local explanations).…”
Section: Explainable Artificial Intelligence
confidence: 99%
“…For example, to justify outcomes, or to identify what factors play a role in the AI system, there might be no need to explain the exact details of the inner workings of the system. Moreover, distinctions have been made with respect to the method of explanation (Arrieta et al, 2020; Rawal et al, 2021). Explanations can be distinguished into textual and visual explanations, explanations-by-example, counterfactual explanations, explanation by simplification (e.g., using a simplified model), explanation by providing feature relevance, explanation by knowledge rules, full system descriptions (global explanations), or explanations providing reasoning for a specific case or decision made by the system (local explanations).…”
Section: Explainable Artificial Intelligence
confidence: 99%
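
The taxonomy quoted above pairs naturally with a concrete illustration. The sketch below is not taken from the surveyed paper; the dataset, models, and perturbation scale are illustrative assumptions. It shows one way a "local explanation by simplification" can yield feature relevance: a simple linear surrogate is fit to a black-box model's outputs in the neighbourhood of a single instance, and its coefficients act as local feature weights.

```python
# Minimal sketch (illustrative, not the paper's method): a local,
# feature-relevance explanation obtained by simplification.
# A linear surrogate approximates a black-box classifier around one instance.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

instance = X[0]                        # the single decision we want explained
rng = np.random.default_rng(0)
# Perturb the instance locally (scale is an assumed choice: 10% of feature std).
perturbed = instance + rng.normal(scale=0.1 * X.std(axis=0),
                                  size=(500, X.shape[1]))
probs = black_box.predict_proba(perturbed)[:, 1]   # black-box outputs nearby

# Simplified local model: its coefficients serve as local feature relevance.
surrogate = Ridge(alpha=1.0).fit(perturbed, probs)
top = np.argsort(np.abs(surrogate.coef_))[::-1][:5]
for i in top:
    print(f"feature {i}: local weight {surrogate.coef_[i]:+.4f}")
```

The surrogate is deliberately far simpler than the black box; its weights explain only this one local decision, in contrast to global explanations that describe the full system.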
“…Elon Musk, Bill Gates, and Stephen Hawking have mentioned that humans need to be alert to the threat of AI. Imagine how dangerous and terrifying it could be if AI-based systems got out of the control of humans [17], [21], [22].…”
Section: Attacking Artificial Intelligence
confidence: 99%
“…The main issue is about protecting AI itself against the exploitation of vulnerabilities and other threats caused by targeted attacks on AI [21], [22]. Meanwhile, we should prevent AI from harming humans.…”
Section: Protecting Artificial Intelligence
confidence: 99%