2020
DOI: 10.48550/arxiv.2005.01908
Preprint

A multi-component framework for the analysis and design of explainable artificial intelligence

Abstract: The rapid growth of research in explainable artificial intelligence (XAI) follows on two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, which have created high expectations for industrial, commercial and social value. Second, the emergence of concern for creating trusted AI systems, including the creation of regulatory principles to ensure transparency and trust of AI systems. These two threads have created a kind …

Cited by 4 publications (5 citation statements)
References 42 publications
“…Here we discuss the performance of our models based on two broad categories of performance measures: (1) accuracy and (2) explainability [2], as follows. Depression Symptoms Detection (DSD) Task Accuracy: We observe that NLI models perform best and Sentence Embedding Family (SEF) models are on par, which indicates that NLI and sentence embedding models, with their semantic similarity pre-training, are inherently better at ZSL tasks. With respect to top-k, we can see a gradual decrease of Micro-F1 as top-k increases, with a peak at 1; see Figure 3.…”
Section: Results Discussion
confidence: 99%
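For readers less familiar with the zero-shot setup described in this citing work, the sketch below shows how an NLI-pretrained model can be repurposed for symptom detection with a top-k cutoff. The symptom labels, model checkpoint and value of k are illustrative assumptions, not the cited paper's exact configuration.

```python
# Hypothetical sketch: zero-shot depression-symptom detection with an NLI model.
# The labels, checkpoint and k below are assumptions for illustration only.
from transformers import pipeline

# An NLI-pretrained model repurposed for zero-shot classification.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

post = "I can't sleep and nothing feels worth doing anymore."
symptoms = ["insomnia", "loss of interest", "fatigue", "feeling worthless"]

result = classifier(post, candidate_labels=symptoms, multi_label=True)

# Keep the k highest-scoring symptoms; as the quoted discussion notes,
# Micro-F1 tends to fall as k grows, peaking at k = 1.
k = 2
top_k = list(zip(result["labels"], result["scores"]))[:k]
print(top_k)
```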
“…Most of the earlier work in text-based Depression classification can be divided into two broad categories: (1) post-level signs of Depression detection [23,5] and (2) user-level signs of Depression detection [24,8]. It is to be noted that task (1) is often an important prerequisite of task (2). More importantly, for clinically meaningful user-level signs of Depression detection, we need models that can identify post-level signs of clinical Depression symptoms.…”
Section: Earlier Work
confidence: 99%
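As a rough illustration of why task (1) feeds task (2), the sketch below aggregates hypothetical post-level symptom detections into a user-level signal; the aggregation rule and threshold are assumptions, not taken from the cited works.

```python
# Hypothetical sketch: aggregating post-level detections (task 1) into a
# user-level decision (task 2). The rule and threshold are illustrative only.
from collections import Counter

def user_level_signal(post_predictions, min_positive_posts=3):
    """post_predictions: one list of detected symptom labels per post."""
    symptom_counts = Counter(label for post in post_predictions for label in post)
    positive_posts = sum(1 for post in post_predictions if post)
    # Flag the user only if enough individual posts carry symptom evidence.
    return positive_posts >= min_positive_posts, symptom_counts

flag, counts = user_level_signal(
    [["insomnia"], [], ["fatigue", "insomnia"], ["loss of interest"]]
)
print(flag, counts)
```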
“…Accordingly, the quality of explanations depends on how the stakeholder/audience perceives them, given their background, goals, expectations, context, etc. (Atakishiyev et al., 2020). Hence, based on (Swartout & Moore, 1993; Walsh et al., 2021), the following criteria are considered to evaluate explanations of AI models:…”
Section: Explainable AI Methods and Their Evaluation
confidence: 99%
“…This concept of interpretability representing models that can be decomposed by an appropriately skilled audience is becoming more widely recognised amongst contemporary authors. Specifically, authors identify linear models, decision trees, rule-based models and constrained variants of black-box models as interpretable models [21]- [25]. Such models are often referred to as "inherently" interpretable [22], [23], [26], [27] or "intrinsic" [28], with the advantage that they are able to provide accurate and undistorted [26] explanations for the model output.…”
Section: Terminology
confidence: 99%
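To make the notion of an "inherently" interpretable model concrete, here is a minimal sketch of a shallow decision tree whose learned rules can be read off directly; the dataset and depth are illustrative choices, not drawn from the cited works.

```python
# Minimal sketch of an inherently interpretable model: a shallow decision tree
# whose full decision logic can be printed as human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# The rules below are the model itself, so the "explanation" is undistorted.
print(export_text(tree, feature_names=list(data.feature_names)))
```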
“…This definition brings together four key elements of context frequently discussed in the literature. First, it leverages the centrality of the audience [25], [87]-[90], since it is the audience who determines whether the explanation is a good one or not [16]. Second, it captures the importance of understanding the goals of the audience [85], [91], since it is the goals that drive the ML model design [85], [91], [92].…”
Section: A. Explanations Within a Specific Context
confidence: 99%