2022
DOI: 10.1109/access.2022.3208957

Enhancing Cluster Analysis With Explainable AI and Multidimensional Cluster Prototypes

Abstract: Explainable Artificial Intelligence (XAI) aims to introduce transparency and intelligibility into the decision-making process of AI systems. Most often, its application concentrates on supervised machine learning problems such as classification and regression. Nevertheless, in the case of unsupervised algorithms like clustering, XAI can also bring satisfactory results. In most cases, such application is based on the transformation of an unsupervised clustering task into a supervised one and providing generalis…

Cited by 17 publications (3 citation statements)
References 26 publications
“…The study participants found that the ClAMPs method enables better descriptions of clusters and helps in understanding clusters well, particularly when applied to artificially generated datasets. This means it is potentially applicable to diverse model types [30].…”
Section: ClAMPs
Citation type: mentioning
Confidence: 99%
See 1 more Smart Citation
“…The study participants found that the ClAMPs method enables better descriptions of clusters and helps in understanding clusters well, particularly when applied to artificially generated datasets. This means it is potentially applicable to diverse model types [30].…”
Section: Clampsmentioning
confidence: 99%
“…ClAMPs and TNTRules seek to explain model predictions; nevertheless, some constraints and difficulties might not be easily accessible [30]. Although it depends on locating optimum anchors, which may be computationally costly, OAK4XAI seeks to develop succinct and understandable anchors using game theory techniques [33].…”
Section: Challenges
Citation type: mentioning
Confidence: 99%
“…They also identified open research areas for types of explanations and for assessing explanations and interpretability. Bobek et al [37] focused on the XAI problem in unsupervised ML. They observed that global explanations might be overly broad, whereas local explanations based on the centers of gravity may overlook valuable information regarding the shape and distribution of clusters.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%