2022
DOI: 10.3390/data7070093

The Role of Human Knowledge in Explainable AI

Abstract: As the performance and complexity of machine learning models have grown significantly over recent years, there has been an increasing need to develop methodologies to describe their behaviour. Such a need has mainly arisen due to the widespread use of black-box models, i.e., high-performing models whose internal logic is challenging to describe and understand. Therefore, the machine learning and AI field is facing a new challenge: making models more explainable through appropriate techniques. The final goal …

Cited by 21 publications (10 citation statements)
References 112 publications
“…While they lack a formal mathematical definition, efforts have been made to differentiate these two concepts [32,33]. Explainability refers to the ability to communicate with humans in understandable terms [34], whereas interpretability concerns the ability to comprehend the reasoning behind a model's outputs [35]. Explainable AI (XAI) is used in health care to communicate transparent and understandable automated decision-making to impacted patients [36].…”
Section: Explainable and Interpretable AI
confidence: 99%
“…First, there is a clear indication that the research findings on security vulnerability coordination should be fully exploited, so that coordination practices can accommodate AI technologies and the new vulnerabilities they introduce. With AI increasingly being used in real-world systems, developing strategies to understand and resolve its inevitable and peculiar security implications constitutes one of the most pressing issues [11], [12]. Red teaming presents a second area of benefit: improving red-teaming capabilities is an effective approach.…”
Section: Expanding Security Coordination and Red Teaming
confidence: 99%
“…Teams must steer by experimentation to 'fingerprint' such interdependencies, on the one hand, and devise contingency plans for unexpected behaviors caused by changes within systems, on the other. This highlights the need for a non-linear analytical perspective that captures the complex interdependencies within the AI [10], [11]. When organizing their use in high-risk environments, such as managed highway systems or power grids, operators should ensure the adaptability and safety of AI systems.…”
Section: Interdependence and Experimentation
confidence: 99%
“…As previously explained, XAI refers to AI models whose processes are more transparent than those of deep learning [7]. The decision-making process in XAI is also based on mathematical calculations that are easier for users to understand [11].…”
Section: Stroke Prediction Using Explainable Artificial Intelligence …
confidence: 99%
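The contrast drawn above, transparent mathematical calculations versus a deep network's opaque internals, can be illustrated with a minimal sketch of a model that is interpretable by construction. The feature names, weights, and risk-score form below are hypothetical placeholders for illustration only, not taken from the cited stroke-prediction study:

```python
# Hypothetical linear risk score: every feature's contribution to the
# output is a single readable product, unlike a deep network's internals.

FEATURES = ["age", "hypertension", "avg_glucose"]          # hypothetical inputs
WEIGHTS = {"age": 0.04, "hypertension": 0.9, "avg_glucose": 0.01}  # illustrative
BIAS = -4.0                                                # illustrative offset


def risk_score(patient: dict) -> tuple:
    """Return the raw score plus a per-feature contribution breakdown."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in FEATURES}
    return BIAS + sum(contributions.values()), contributions


score, parts = risk_score({"age": 70, "hypertension": 1, "avg_glucose": 130})
# `parts` maps each feature to its exact contribution to `score`,
# which is the kind of directly checkable explanation XAI aims for.
```

Because the score is just a weighted sum, a user can verify each contribution by hand, which is precisely the transparency the quoted passage attributes to XAI approaches.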
“…It cannot be denied that the Deep Learning used in previous studies achieves a high level of accuracy. However, this high accuracy is not accompanied by transparency in how Deep Learning processes data [7]. The data processing that occurs in deep learning tends to be difficult for users to understand because of the complex architecture of its constituent layers [8].…”
Section: Introduction
confidence: 99%