2021
DOI: 10.1109/tai.2021.3111138
MACE: Model Agnostic Concept Extractor for Explaining Image Classification Networks

Cited by 9 publications (7 citation statements)
References 20 publications
“…Word embedding technology is still developing, and several pretrained language models have been proposed to measure the similarity between words. Word2Vec was proposed in 2013 [8], GloVe in 2014 [9], OpenAI GPT in 2016 [10], ELMo (Embeddings from Language Models) [11, 12] and BERT (Bidirectional Encoder Representations from Transformers) in 2018 [13, 14], and Transformer-XL [5] and XLNet [13], based on the Transformer architecture [15], in 2019. The pretrained models commonly used in short-text affective orientation analysis include Word2Vec, GloVe, and BERT.…”
Section: Introduction
confidence: 99%
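The similarity measurement that these embedding models enable is usually cosine similarity between word vectors. A minimal sketch, using hypothetical toy vectors rather than real Word2Vec/GloVe/BERT embeddings (which are typically hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-d embeddings for illustration only.
embeddings = {
    "king":  np.array([0.8, 0.1, 0.6, 0.2]),
    "queen": np.array([0.7, 0.2, 0.6, 0.3]),
    "apple": np.array([0.1, 0.9, 0.1, 0.8]),
}

sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
# Semantically related words score higher under this measure.
```

The same dot-product geometry underlies similarity lookups in all the models listed above; only the way the vectors are learned differs.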
“…Wang et al [83] generate image patches [139] and use an attention mechanism to estimate the salient regions in a given image. However, a major limitation of these saliency map approaches is that they almost always highlight the region containing the entire object to be salient [38,63,64]. While these explanations can ascertain whether the model looks at the object to arrive at its prediction or relies on any non-object spurious correlations [32,34], finer explanations depicting the contributions of image primitives such as colors, textures, and parts cannot be obtained from the Class Activation Maps.…”
Section: Posthoc Methods
confidence: 99%
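The Class Activation Map construction this passage critiques can be sketched as a channel-weighted sum of the last convolutional feature maps. This is an illustrative sketch with toy shapes and random values, not the method of any paper cited here; it assumes a network ending in global average pooling followed by a linear classifier:

```python
import numpy as np

# Toy placeholders for the last conv layer's output and the classifier
# weights of the predicted class (in practice both come from a trained CNN).
rng = np.random.default_rng(0)
feature_maps = rng.random((8, 7, 7))   # (channels, H, W)
class_weights = rng.random(8)          # one weight per channel

# CAM: channel-wise weighted sum of the feature maps.
cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))  # (H, W)

# Normalize to [0, 1] for overlay on the input image as a heatmap.
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

Because the heatmap is a single spatial map per class, it tends to cover the whole object, which is exactly why it cannot separate the contributions of colors, textures, and parts as the passage notes.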
“…Yeh et al [91] propose automatically extracting the complete set of concepts from the data, thereby preventing a possible loss of faithfulness due to leveraging concepts sampled from a different distribution [135]. Kumar et al [63] extend the capability of this framework [91] to unravel the complete blueprint of a class by formulating the concepts to be clustered in a class-specific fashion [52]. However, while extracting the explanations, these frameworks use multilayer nonlinear networks, which are themselves black boxes whose workings cannot be unraveled.…”
Section: Concept-based Explanations
confidence: 99%
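The class-specific clustering idea can be illustrated in miniature: gather feature activations of patches from images of a single class and cluster them, so each cluster centre serves as a candidate "concept" for that class. A plain k-means stands in here for the learned (black-box) extractor these frameworks actually use; the activations are synthetic:

```python
import numpy as np

# Synthetic patch activations for ONE class: two well-separated groups,
# standing in for two visual concepts of that class.
rng = np.random.default_rng(1)
activations = np.vstack([
    rng.normal(0.0, 0.3, size=(50, 16)),  # patches dominated by concept A
    rng.normal(3.0, 0.3, size=(50, 16)),  # patches dominated by concept B
])

def kmeans(X, init_idx, iters=10):
    """Basic Lloyd's k-means with explicit initial centre indices."""
    centres = X[init_idx].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        centres = np.stack([X[labels == j].mean(0) for j in range(len(init_idx))])
    return centres, labels

# Seed one centre in each synthetic group for a deterministic illustration.
concepts, assignment = kmeans(activations, init_idx=[0, 50])
```

Each row of `concepts` is a class-specific concept prototype; repeating the procedure per class yields the class-wise "blueprint" the passage refers to.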
“…ICE measures the importance of its class-wise concepts using TCAV. Other methods learn concept vectors and a mapping to feature space either for all classes simultaneously (ConceptSHAP [8]) or for each class separately (MACE [26], PACE [27]). Importantly, each method defines a custom measure of concept importance that is applicable only within its respective framework.…”
Section: Concept Similarities To Quantitatively Characterize the Simi...
confidence: 99%
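The TCAV-style importance measure mentioned above can be sketched in simplified form. Here the Concept Activation Vector (CAV) is taken as the difference of mean activations between concept and random examples (the actual TCAV method fits a linear classifier and uses its normal vector), and the score is the fraction of inputs whose class-logit gradient has a positive directional derivative along the CAV. All data below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
concept_acts = rng.normal(1.0, 0.2, size=(40, 8))  # activations for concept images
random_acts  = rng.normal(0.0, 0.2, size=(40, 8))  # activations for random images

# Simplified CAV: difference of means, normalized to unit length.
cav = concept_acts.mean(0) - random_acts.mean(0)
cav /= np.linalg.norm(cav)

# Gradients of the class logit w.r.t. activations for a batch of test inputs
# (synthetic here; in practice obtained by backprop through the network head).
grads = rng.normal(0.5, 1.0, size=(100, 8))

# TCAV score: fraction of inputs with a positive directional derivative.
tcav_score = float(np.mean(grads @ cav > 0))
```

A score well above 0.5 indicates the concept direction consistently increases the class logit; as the passage notes, such scores are comparable only within a single framework, since each method defines its own importance measure.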