2021
DOI: 10.1109/tetci.2020.3005682

Explainable AI for the Choquet Integral


Cited by 27 publications (12 citation statements)
References 41 publications
“…Most AI and machine learning (ML) systems behave like black boxes, as they fail to explain their decisions [11]. Incorporating XAI into AI and ML systems requires attention to four system features: (i) the quality of inputs and the interactions between them, (ii) the method of combining the input information, (iii) the quality of the training data, and (iv) the level of trust users put in system decisions [11]. The AXAI capability framework proposed in part one of this paper, and its implementation reported in the preceding sections of this part, exploited and relied on these four features.…”
Section: Discussion (mentioning)
confidence: 99%
“…It is proposed that four system features (the quality of inputs and the interactions between them, the method of combining the input information, the quality of the training data, and the trustworthiness of system decisions) would suffice to incorporate XAI in AI systems [11]. However, real-life use of these features for incorporating XAI in AI systems is not yet common.…”
Section: Introduction (mentioning)
confidence: 99%
“…Nevertheless, the outputs of the submodels are always treated equally in these two fusion strategies, which inflates the contribution of poorer outputs and explains nothing about the interactions between classifiers. Owing to its explainable characteristics, the fuzzy integral [35] can explain not only the importance of each classifier but also the interactions between classifiers. For example, as shown in Figure 2, the red line demonstrates the calculation order of the fuzzy integral.…”
Section: Decision-Level Fuzzy Fusion Strategy (mentioning)
confidence: 99%
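
The "calculation order" referred to above is the heart of the discrete Choquet integral: classifier outputs are sorted in decreasing order, and each output is weighted by how much adding its classifier increases a fuzzy measure defined over coalitions of classifiers. The following is a minimal sketch in Python, not the cited paper's implementation; the classifier names, scores, and fuzzy-measure values are all hypothetical, and the measure is assumed to be supplied explicitly as a monotone set function that is 0 on the empty set and 1 on the full set.

def choquet_integral(scores, measure):
    """Discrete Choquet integral of classifier outputs w.r.t. a fuzzy measure.

    scores  -- dict: classifier name -> support in [0, 1]
    measure -- dict: frozenset of classifier names -> measure value in [0, 1],
               assumed monotone, 0 on the empty set, 1 on the full set
    """
    # Sort classifiers by output in decreasing order; this ordering is the
    # "calculation order" that Figure 2 of the citing paper illustrates.
    ordered = sorted(scores, key=scores.get, reverse=True)
    fused, g_prev, coalition = 0.0, 0.0, set()
    for name in ordered:
        coalition.add(name)
        g = measure[frozenset(coalition)]
        # Weight each output by the measure gain from adding its classifier
        # to the coalition; this is where individual importance and
        # classifier interactions both enter the fusion.
        fused += scores[name] * (g - g_prev)
        g_prev = g
    return fused

# Hypothetical three-classifier fusion; every measure value is invented.
scores = {"svm": 0.9, "cnn": 0.6, "knn": 0.3}
measure = {
    frozenset(): 0.0,
    frozenset({"svm"}): 0.4,
    frozenset({"cnn"}): 0.3,
    frozenset({"knn"}): 0.2,
    frozenset({"svm", "cnn"}): 0.8,
    frozenset({"svm", "knn"}): 0.6,
    frozenset({"cnn", "knn"}): 0.5,
    frozenset({"svm", "cnn", "knn"}): 1.0,
}
print(choquet_integral(scores, measure))  # 0.9*0.4 + 0.6*0.4 + 0.3*0.2 = 0.66

Because the made-up measure assigns the pair {svm, cnn} the value 0.8, more than the 0.4 + 0.3 of its singletons, those two classifiers reinforce each other under this measure; reading such gaps off the fuzzy measure is exactly the interaction-between-classifiers explanation the quoted passage attributes to the fuzzy integral.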
“…Meanwhile, several recent works propose methods for measuring and assessing the explainability of machine learning and AI systems (Burkart & Huber, 2021). Nonetheless, recent literature cites AI systems as difficult to understand, adopt, and trust (Murray et al., 2020).…”
Section: Introduction (mentioning)
confidence: 99%