2022
DOI: 10.48550/arxiv.2204.01888
Preprint

ConceptExplainer: Understanding the Mental Model of Deep Learning Algorithms via Interactive Concept-based Explanations

Cited by 4 publications (9 citation statements)
References 0 publications

“…Additionally, users could define custom difference metrics [51] to study desired behavior. Although ProactiV supports analysis of local model behavior with the instance optimization summary and input parameter distributions, this analysis can be augmented with advanced concept-based or semantic analysis [25], [26]. Such an analysis could support a more detailed analysis of image semantics.…”
Section: Discussion and Future Work (citation type: mentioning; confidence: 99%)

“…Concept-based analysis has been proposed to support the identification of failure patterns. Methods tag images with human-interpretable concept(s), like "shadow", either interactively [25] or via automatic concept-based explanations [26]. ConceptExplainer [26] further supported navigating the complex concept space for finer model behavior analysis, revealing semantic causes of failure.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
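
The citing work above describes tagging images with human-interpretable concepts such as "shadow". As a rough illustration of how automatic concept-based explanation of this kind typically works, here is a minimal sketch in the spirit of concept activation vectors (CAVs): a linear probe is fit on intermediate-layer activations to find a concept direction, and images are scored against it. All function names, the probe setup, and the threshold are assumptions for illustration, not ConceptExplainer's actual implementation.

```python
# Minimal CAV-style concept-tagging sketch (assumed setup, not the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_cav(concept_acts, random_acts):
    """Fit a linear probe separating concept-example activations from random ones.

    concept_acts, random_acts: (n, d) arrays of intermediate-layer activations.
    Returns the unit normal of the decision boundary, i.e. the concept direction.
    """
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def concept_score(acts, cav):
    """Cosine similarity of each image's activation vector with the concept direction."""
    acts = acts / np.linalg.norm(acts, axis=1, keepdims=True)
    return acts @ cav

# Hypothetical usage: tag images whose "shadow" score exceeds a chosen threshold.
# shadow_cav = learn_cav(acts_of_shadow_examples, acts_of_random_examples)
# tags = concept_score(test_acts, shadow_cav) > 0.2
```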