2020
DOI: 10.1007/978-3-030-50334-5_4
Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support

Cited by 52 publications (39 citation statements)
References 44 publications
“…XAI has already been applied in different contexts, such as the health sector [26] or recommender systems [27]. Motivations for using XAI include explaining to justify, control, improve, discover, verify, or manage, as well as complying with legislation [5,28].…”
Section: Deep Learning Algorithms As Black Boxes
confidence: 99%
“…When deciding between models, it supported both expert and non-expert users by providing insight into predictions, helping them evaluate their confidence and improve untrustworthy models. Further, in [1], the authors address the effect of explainability on trust in AI and computer vision systems through the improved understandability and predictability of deep learning-based computer vision decisions on medical diagnostic data. They also explore how XAI can be used to compare the recognition strategies of two deep learning models: a Multi-Layer Perceptron (MLP) and a Convolutional Neural Network (CNN).…”
Section: Explainable Artificial Intelligence in the Medical Field
confidence: 99%
“…On the other hand, in recent years, deep learning and AI-based extraction of information from images have received growing interest in fields such as medical diagnostics, finance, forensics, scientific research, and education. In these domains, it is often necessary to understand the reasons for a model's decisions so that a human can validate the outcome [1].…”
Section: Introduction
confidence: 99%
“…In this paper, we use two post-hoc, model-agnostic explainability techniques, Local Interpretable Model-agnostic Explanations (LIME) [15,16] and SHapley Additive exPlanations (SHAP) [17,18], to analyze the models on the dataset by checking the evaluation metrics and selecting the model whose explanation can be separated from the model itself. The intent is to evaluate more easily how each word in the black-box model contributes to the prediction of sarcastic dialogue by the speaker, using the sequential nature of a scene in the TV series.…”
Section: Introduction
confidence: 99%
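The excerpt above relies on perturbation-based, model-agnostic attribution: LIME and SHAP both probe a black-box model with modified inputs and measure how the prediction shifts. The following is a minimal leave-one-out sketch of that perturbation idea only — it is not the LIME or SHAP library, and the toy sarcasm scorer (`toy_sarcasm_score`) and its cue words are hypothetical stand-ins for the paper's actual model.

```python
def explain_by_occlusion(predict, tokens):
    """Model-agnostic word attribution: drop each token in turn and
    measure how much the black-box prediction changes (leave-one-out)."""
    base = predict(tokens)
    return {t: base - predict([w for j, w in enumerate(tokens) if j != i])
            for i, t in enumerate(tokens)}

def toy_sarcasm_score(tokens):
    """Hypothetical black-box scorer: sums fixed cue weights, capped at 1.0."""
    cues = {"sure": 0.4, "great": 0.3, "whatever": 0.5}
    return min(1.0, sum(cues.get(t, 0.0) for t in tokens))

# Attribute the score of a short utterance to its individual words.
weights = explain_by_occlusion(toy_sarcasm_score, "oh sure that went great".split())
```

Here `weights["sure"]` is about 0.4 and `weights["oh"]` is about 0, mirroring how LIME-style methods surface which words drive a prediction without inspecting model internals; the real libraries additionally sample many perturbations and fit a weighted local surrogate rather than dropping one token at a time.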