2019
DOI: 10.6025/jic/2019/10/4/121-127
Visualization of Explanations of Incremental Models

Abstract: The temporal dimension that is ever more prevalent in data makes data stream mining (incremental learning) an important field of machine learning. In addition to accurate predictions, explanations of models and examples are a crucial component, as they provide insight into the model's decisions and lessen its black-box nature, thus increasing the user's trust. Proper visual representation of data is also highly relevant to the user's understanding: visualization is often utilised in machine learning since it shifts th…
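To make the notion of incremental learning concrete, here is a minimal sketch (not from the paper itself) of a model updated one example at a time, as a data-stream learner would be — a plain online perceptron in pure Python. The stream, learning rate, and helper names are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch of incremental (online) learning: a perceptron that is
# updated on each arriving example instead of being fit on a fixed batch.

def perceptron_update(w, b, x, y, lr=0.1):
    """Update weights w and bias b on a single (x, y) example; y in {-1, +1}."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    if y * score <= 0:  # misclassified (or on the boundary): adjust
        w = [wi + lr * y * xi for wi, xi in zip(w, x)]
        b = b + lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# A toy "stream" of linearly separable examples: label = sign(x0 - x1)
stream = [([2.0, 1.0], 1), ([1.0, 3.0], -1), ([3.0, 0.5], 1), ([0.5, 2.0], -1)]
w, b = [0.0, 0.0], 0.0
for x, y in stream * 20:          # repeated passes stand in for a long stream
    w, b = perceptron_update(w, b, x, y)

print([predict(w, b, x) for x, _ in stream])  # -> [1, -1, 1, -1]
```

Because the model state (`w`, `b`) is revised per example, it can track a stream without storing past data — the setting in which the paper's explanations and visualizations operate.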

Cited by 1 publication (1 citation statement); references 8 publications.
“…Previous studies have shown that local data-centric explanations are more effective than local model-centric explanations for increasing the trust and understandability of prediction models by justifying predicted outcomes with reference to the training data [8,21]. However, these studies have focused on the efficacy of these explanations solely in the context of prediction justification of individual data instances (i.e., local explanations) rather than the working of the whole model (i.e., global explanations).…”
Section: XAI Methods for ML Systems
confidence: 99%