2021
DOI: 10.1016/j.imavis.2021.104310

Context-based image explanations for deep neural networks

Cited by 13 publications (6 citation statements)
References 10 publications

“…The articles with a legal background (circa 2%) cover algorithmic transparency under the European General Data Protection Regulation (GDPR) [24], applications of XAI to legal text [69], and explanation techniques in law and their applications to machine learning models [70]. In the field of robotics (circa 2%), the considered articles address explainable reinforcement learning [71,72] as well as the categorization of explanatory capabilities and requirements [51]. Examples of other application fields include autonomous driving [50], communication systems and networking [49,73], education [74,75], and social sciences [11].…”
Section: Methods (mentioning)
confidence: 99%
“…They further mapped these objects to TF-IDF feature vectors and used them to train a scene classifier to predict scene categories. As for the explainability aspect of scene classification, the work of Anjomshoae et al. [51] is the most closely related to our approach in providing text-based explanations using local information (i.e., objects). It generated textual explanations by calculating the contextual importance of each semantic category in the scene, masking that particular segment and determining its effect on the prediction.…”
Section: Scene Classification (mentioning)
confidence: 99%
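
The masking procedure described in the statement above can be illustrated with a short sketch. This is a minimal illustration, not the cited authors' code: model, image, and segments are assumed names, where model(batch) returns class probabilities and segments assigns each pixel a semantic-category id.

```python
import numpy as np

def segment_importance(model, image, segments, target_class, fill_value=0.0):
    """Score each semantic segment by how much masking it lowers the
    predicted probability of target_class (occlusion-style importance)."""
    baseline = model(image[None])[0, target_class]  # unmasked prediction
    scores = {}
    for seg_id in np.unique(segments):
        masked = image.copy()
        masked[segments == seg_id] = fill_value  # occlude one semantic segment
        drop = baseline - model(masked[None])[0, target_class]
        scores[int(seg_id)] = float(drop)  # larger drop => more important segment
    return scores
```

A large probability drop for a segment suggests that semantic category is contextually important for the prediction, which is the signal the cited work turns into a textual explanation.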
“…CI expresses the importance of the different feature attributes for a prediction. Beyond importance, we also want to know the extent to which the attributes of the different input features are favourable (or not) for a prediction; this is referred to as contextual utility [25].…”
Section: Explanation Generation Approach (mentioning)
confidence: 99%
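
For reference, contextual importance (CI) and contextual utility (CU) are commonly formalized as below, following Främling's CIU framework; the notation is an assumption for illustration and is not quoted from the citing paper.

```latex
% For output j, input vector x, and a studied feature subset {i}:
%   Cmin_j, Cmax_j     = min/max of output j as the features in {i} vary
%   absmin_j, absmax_j = the overall range of output j (e.g. 0 and 1 for probabilities)
CI_j(x, \{i\}) = \frac{Cmax_j(x, \{i\}) - Cmin_j(x, \{i\})}{absmax_j - absmin_j}, \qquad
CU_j(x, \{i\}) = \frac{out_j(x) - Cmin_j(x, \{i\})}{Cmax_j(x, \{i\}) - Cmin_j(x, \{i\})}
```

Read this way, a CI near 1 means the studied features can swing the output across most of its range (important), while a CU near 1 means their current values push the output toward its maximum (favourable), matching the importance/utility distinction drawn in the statement.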