2021
DOI: 10.48550/arxiv.2103.02561
Preprint

ICAM-reg: Interpretable Classification and Regression with Feature Attribution for Mapping Neurological Phenotypes in Individual Scans

Abstract: An important goal of medical imaging is to be able to precisely detect patterns of disease specific to individual scans; however, this is challenged in brain imaging by the degree of heterogeneity of shape and appearance. Traditional methods, based on image registration to a global template, historically fail to detect variable features of disease, as they utilise population-based analyses, suited primarily to studying group-average effects. In this paper we therefore take advantage of recent developments in g…

Cited by 4 publications (6 citation statements) | References 43 publications

Citation statements:
“…One class of methods utilise variational encodings such as variational autoencoders (VAEs) [18], and VAE-GANs [22], although the authors found that attempts to adapt these methods to surface domains were unsuccessful, with models unable to capture individual cortical folding variation and collapsing to group averages. A more powerful example of this applied to volumetric cortical data is the iCAM architecture [3], which encodes separate variational disentangled spaces for content and age, which may be more amenable to adaptation with gDL. There are further alternatives that utilise direct conditioning on latent variables such as conditional VAEs and conditional GANs, but these too are more commonly associated with conditioning on classes, not continuous variables.…”
Section: Discussion
confidence: 99%
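
For readers less familiar with the disentanglement described in the statement above (separate variational latent spaces for content and a continuous attribute such as age), the sketch below shows one minimal way such a split can be set up. The module names, dimensions, and the simple age regressor are illustrative assumptions, not the ICAM-reg or iCAM implementation.

```python
import torch
import torch.nn as nn

class DisentangledVAE(nn.Module):
    """Toy encoder/decoder with two variational latent spaces: z_content
    (anatomy/appearance) and z_attr (a continuous attribute such as age).
    Illustrative sketch only, not the ICAM-reg architecture."""

    def __init__(self, in_dim=4096, content_dim=64, attr_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        # Separate mean / log-variance heads for the two latent spaces.
        self.mu_c = nn.Linear(256, content_dim)
        self.lv_c = nn.Linear(256, content_dim)
        self.mu_a = nn.Linear(256, attr_dim)
        self.lv_a = nn.Linear(256, attr_dim)
        self.dec = nn.Sequential(nn.Linear(content_dim + attr_dim, 256),
                                 nn.ReLU(), nn.Linear(256, in_dim))
        # Small regressor ties z_attr to the continuous label (e.g. age).
        self.reg = nn.Linear(attr_dim, 1)

    @staticmethod
    def reparam(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        h = self.enc(x)
        z_c = self.reparam(self.mu_c(h), self.lv_c(h))   # content space
        z_a = self.reparam(self.mu_a(h), self.lv_a(h))   # attribute space
        recon = self.dec(torch.cat([z_c, z_a], dim=1))
        return recon, self.reg(z_a), (z_c, z_a)

# Usage: reconstruct a flattened image and regress age from z_attr only.
x = torch.randn(2, 4096)
recon, age_pred, _ = DisentangledVAE()(x)
```

Training such a model would typically combine a reconstruction loss, KL terms on both latent spaces, and a regression loss on the attribute head; the actual ICAM-reg objective is more involved and is described in the paper.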
“…Deep generative modelling presents enormous opportunities for medical imaging analysis: from image segmentation [13,9,40,8], registration [39], motion correction and denoising [30,42,23,1], to anomaly detection [14,45,4] and the development of clinically interpretable models of disease progression [2,3,25]. Image-to-image translation is a type of generative modelling problem where images are transformed across domains, in a way that preserves their content.…”
Section: Introduction
confidence: 99%
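
As a rough illustration of the content-preserving translation framing in the statement above, the sketch below encodes an image to a content code shared across domains and decodes it conditioned on a one-hot domain label. All names and dimensions are hypothetical; this is a generic sketch, not any specific published model.

```python
import torch
import torch.nn as nn

class DomainTranslator(nn.Module):
    """Minimal content-preserving image-to-image translation sketch:
    a shared encoder plus a decoder conditioned on a one-hot domain code."""

    def __init__(self, in_dim=4096, content_dim=64, n_domains=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, content_dim))
        self.decoder = nn.Sequential(nn.Linear(content_dim + n_domains, 256),
                                     nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x, domain_onehot):
        content = self.encoder(x)  # intended to be domain-invariant content
        return self.decoder(torch.cat([content, domain_onehot], dim=1))

# Translate two flattened images into domain 1 while keeping their content.
model = DomainTranslator()
x = torch.randn(2, 4096)
target_domain = torch.tensor([[0.0, 1.0], [0.0, 1.0]])
translated = model(x, target_domain)
```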
“…In the alignment stage, a layer cannot learn unless its upper layers are roughly aligned. Bass et al. [10] used developing Human Connectome Project and UK Biobank datasets. Their method was validated via Mini-Mental State Examination cognitive test score prediction on the Alzheimer's Disease Neuroimaging Initiative cohort, and brain age prediction for both neurodevelopment and neurodegeneration was also considered.…”
Section: Feedback Alignment
confidence: 99%
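
The feedback-alignment point in the statement above (a layer cannot learn until the layers above it are roughly aligned) can be made concrete with a small numpy sketch in which the error is propagated through a fixed random feedback matrix B instead of the transposed forward weights. This is a generic illustration of feedback alignment, not the method of Bass et al. [10], and all sizes and hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network trained with feedback alignment: the backward pass uses a
# fixed random matrix B instead of W2.T, so the hidden layer can only learn
# once W2 becomes roughly aligned with B.
n_in, n_hid, n_out = 10, 32, 1
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random feedback weights

X = rng.normal(size=(256, n_in))
y = X @ rng.normal(size=(n_in, n_out))   # toy linear regression target

lr = 0.01
for step in range(1000):
    h = np.tanh(X @ W1.T)                # hidden activations
    y_hat = h @ W2.T                     # linear output layer
    e = y_hat - y                        # output error
    # Feedback alignment: project the error with B, not W2.T.
    delta_h = (e @ B.T) * (1 - h ** 2)
    W2 -= lr * e.T @ h / len(X)
    W1 -= lr * delta_h.T @ X / len(X)

print("final MSE:", float((e ** 2).mean()))
```

In this toy setup the hidden-layer update only becomes useful once W2 has rotated toward B during training, which is the alignment prerequisite the statement refers to.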
“…These measures can be computed on the basis of the following Eqs. (10)–(19) [72]:…”
Section: Datasets
confidence: 99%
“…Thus, this kind of method seems to be more valuable for understanding the model decision. However, to the best of our knowledge, there are currently few interpretable methods for AD classification under our definition [38,39].…”
Section: Current Limitations of DL in AD Classification
confidence: 99%