2023
DOI: 10.3390/asi6010026

Evaluation Metrics Research for Explainable Artificial Intelligence Global Methods Using Synthetic Data

Abstract: In recent years, artificial intelligence technologies have been developing more and more rapidly, and a lot of research is aimed at solving the problem of explainable artificial intelligence. Various XAI methods are being developed to allow the user to understand the logic of how machine learning models work, and in order to compare the methods, it is necessary to evaluate them. The paper analyzes various approaches to the evaluation of XAI methods, defines the requirements for the evaluation system and sugges…

Cited by 10 publications (4 citation statements)
References 31 publications
“…• Oblizanov, A. et al., 2023: This research explores evaluation metrics for explainable AI global methods using synthetic data, shedding light on the challenges and advancements in assessing the performance of interpretable models and contributing to the understanding of their effectiveness and limitations. The authors propose that evaluation methods must be based on accuracy features, must have a stable distribution, and must be instance-guided.…”
Section: Local Interpretable Model-Agnostic Explanations (mentioning)
confidence: 99%
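Since this statement concerns accuracy-based evaluation of global XAI methods on synthetic data, a minimal illustrative sketch may help. It is not the metric proposed by Oblizanov et al.; it is a hypothetical check that compares a global feature-importance vector against the known feature weights of a synthetic linear dataset.

# Hypothetical sketch, not the authors' metric: compare a global
# feature-importance explanation with the ground-truth weights that
# generated a synthetic linear dataset (an "accuracy-based" check).
import numpy as np

rng = np.random.default_rng(0)
true_weights = np.array([3.0, 1.0, 0.0, 0.5])           # ground truth baked into the data
X = rng.normal(size=(1000, 4))
y = X @ true_weights + rng.normal(scale=0.1, size=1000)

# Stand-in for a global XAI output (e.g. mean |SHAP| values per feature);
# here approximated by the absolute coefficients of a least-squares fit.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
explained_importance = np.abs(coef)

def rank_agreement(estimated, truth):
    # Fraction of features whose importance rank matches the ground truth.
    return float(np.mean(np.argsort(-estimated) == np.argsort(-np.abs(truth))))

print("rank agreement:", rank_agreement(explained_importance, true_weights))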
“…Starting from the idea of a centralizing service [Barreto 2016], it proposes the H-KaaS (Health Knowledge as a Service), a KaaS service dedicated to the health domain, adapted from the definition of [Xu and Zhang 2016], which defines the main components of the health-oriented architecture: data holders, the knowledge provider service, and knowledge consumers. XAI refers to the set of techniques that aims to give the user a better understanding of the models' decision-making process and of how the results and conclusions were obtained [Oblizanov et al 2023]. The lack of transparency, trust, and interpretability is one of the main barriers to the adoption of machine learning models in the medical field.…”
Section: Theoretical Foundations (Fundamentação Teórica) (unclassified)
“…CHAIN_APPROX_SIMPLE parameter (Figure 3b), all boundary points are preserved. It removes all unnecessary points, compresses the contour, and also saves memory [31].…”
Section: Algorithm For Contour Analysis Of The Image Of A Cut Of A Gr... (mentioning)
confidence: 99%
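Because the excerpt refers to OpenCV's contour-approximation flags, a short sketch of their behaviour follows; it assumes the OpenCV 4.x Python bindings (cv2) and uses a synthetic rectangle rather than the images of the cited work.

# Minimal sketch (assumes OpenCV 4.x, where findContours returns two values).
import cv2
import numpy as np

# Synthetic binary image containing one filled rectangle as the test shape.
img = np.zeros((100, 100), dtype=np.uint8)
cv2.rectangle(img, (20, 20), (80, 60), 255, thickness=-1)

# CHAIN_APPROX_NONE stores every boundary pixel of the contour.
contours_none, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# CHAIN_APPROX_SIMPLE compresses straight segments, keeping only their end
# points, which removes redundant points and saves memory.
contours_simple, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

print("points with CHAIN_APPROX_NONE:  ", len(contours_none[0]))    # ~ perimeter length
print("points with CHAIN_APPROX_SIMPLE:", len(contours_simple[0]))  # 4 corner points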