2022
DOI: 10.1016/j.patcog.2022.108743
Believe the HiPe: Hierarchical perturbation for fast, robust, and model-agnostic saliency mapping

Cited by 17 publications (5 citation statements)
References 14 publications
“…Another area in which TB-AI falls behind is its deficiency in explainable artificial intelligence (XAI) techniques. Despite receiving considerable attention in multiple fields, including in healthcare and medical research [76], DL algorithms have not been widely implemented in clinical practice [77]. This is primarily due to the need for the enhanced transparency and interpretability of ML models, particularly in critical applications such as disease diagnosis and treatment.…”
Section: Discussion
confidence: 99%
“…In this section we employ Hierarchical Perturbation (HiPe) [24] and standard iterative perturbation [25] to understand how the model is able to identify CD3 expressing lymphocytes. These methods are widely used for deep learning interpretability as they offer intuitive visual interpretations of which regions in the input were more or less important in determining the model’s output.…”
Section: Discussion
confidence: 99%
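For readers unfamiliar with the standard iterative perturbation referenced above, the following is a minimal occlusion-style sketch in Python/PyTorch. It assumes a generic classifier `model` that returns class logits for a batch; the function name `occlusion_saliency` and all parameters are illustrative, not code from the cited works.

```python
# Minimal occlusion-style iterative perturbation saliency (illustrative sketch,
# not the cited authors' implementation). Assumes `model` maps a (1, C, H, W)
# tensor to class logits of shape (1, num_classes).
import torch

def occlusion_saliency(model, image, target_class, patch=16, stride=16, baseline=0.0):
    """Slide a masking patch over `image` (C, H, W) and record the drop in the
    target-class score: larger drops mean the occluded region mattered more."""
    model.eval()
    _, H, W = image.shape
    with torch.no_grad():
        base_score = model(image.unsqueeze(0))[0, target_class].item()
    saliency = torch.zeros(H, W)
    counts = torch.zeros(H, W)
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            perturbed = image.clone()
            perturbed[:, y:y + patch, x:x + patch] = baseline  # occlude one region
            with torch.no_grad():
                score = model(perturbed.unsqueeze(0))[0, target_class].item()
            saliency[y:y + patch, x:x + patch] += base_score - score
            counts[y:y + patch, x:x + patch] += 1
    return saliency / counts.clamp(min=1)  # average overlapping contributions
```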
“…For this, we use Hierarchical Perturbation (HiPe) [16], a saliency-mapping method which is both model-agnostic and highly computationally efficient – this is necessary in order to mitigate the huge computational cost of pixel-level attribution on gigapixel input images. Given that our model is dual-stage, combining first feature extraction followed by a classification network, and that in this work we tested different model architectures, patch sizes, and other hyperparameters, model agnosticism was also key in this choice of saliency algorithm.…”
Section: Methods
confidence: 99%
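The efficiency argument in the excerpt above rests on the coarse-to-fine nature of hierarchical perturbation: only regions that noticeably change the prediction at a coarse scale are subdivided and re-tested, so the number of forward passes stays manageable on very large inputs. The sketch below illustrates that idea only; the helper `hierarchical_saliency`, the blank-out perturbation, and the stopping rule are hypothetical simplifications, not the published HiPe algorithm.

```python
# Simplified coarse-to-fine perturbation sketch (assumption-laden illustration
# of the hierarchical idea, not the published HiPe implementation).
import torch

def hierarchical_saliency(model, image, target_class, min_cell=32, threshold=0.0):
    model.eval()
    _, H, W = image.shape
    with torch.no_grad():
        base = model(image.unsqueeze(0))[0, target_class].item()
    saliency = torch.zeros(H, W)

    def score_drop(y0, x0, y1, x1):
        perturbed = image.clone()
        perturbed[:, y0:y1, x0:x1] = 0.0  # blank-out perturbation of one region
        with torch.no_grad():
            return base - model(perturbed.unsqueeze(0))[0, target_class].item()

    def refine(y0, x0, y1, x1):
        drop = score_drop(y0, x0, y1, x1)
        if drop <= threshold:            # region barely affects the output: prune
            return
        saliency[y0:y1, x0:x1] += drop   # accumulate evidence at this scale
        if (y1 - y0) <= min_cell:        # finest scale reached
            return
        ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
        for qy0, qy1 in ((y0, ym), (ym, y1)):      # subdivide into quadrants
            for qx0, qx1 in ((x0, xm), (xm, x1)):
                refine(qy0, qx0, qy1, qx1)

    refine(0, 0, H, W)
    return saliency
```

Because non-salient regions are pruned early, the number of model evaluations grows with the amount of salient content rather than with the pixel count, which is why this style of method scales far better than dense occlusion on gigapixel images.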