2022
DOI: 10.1002/hbm.25813
Attention module improves both performance and interpretability of four‐dimensional functional magnetic resonance imaging decoding neural network

Abstract: Decoding brain cognitive states from neuroimaging signals is an important topic in neuroscience. In recent years, deep neural networks (DNNs) have been recruited for multiple brain state decoding and achieved good performance. However, the open question of how to interpret the DNN black box remains unanswered. Capitalizing
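The attention mechanism the abstract refers to can be illustrated with a toy sketch. The following is a minimal, hypothetical spatial-attention gate — not the paper's actual architecture — in which features at each location are scored, the scores are softmax-normalized into an attention map, and the features are reweighted by that map; the same map that modulates performance can then be inspected for interpretability:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def spatial_attention(features, w):
    """Minimal spatial-attention gate (illustrative only).

    features: (locations, channels) array of extracted features.
    w: (channels,) scoring vector (a stand-in for learned weights).
    Returns the reweighted features and the attention map.
    """
    scores = features @ w                # one scalar score per location
    weights = softmax(scores)            # attention map, sums to 1
    gated = features * weights[:, None]  # features reweighted by attention
    return gated, weights

# Tiny usage example with made-up numbers:
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])  # 3 locations, 2 channels
w = np.array([1.0, 2.0])       # hypothetical scoring weights
gated, attn = spatial_attention(feats, w)
```

Here the attention map `attn` concentrates on the location with the highest score, which is what makes such modules directly inspectable.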

Authors

Journals

Cited by 12 publications (17 citation statements)
References 41 publications
“…If clinicians are to use clinical decision support systems, they are ethically obligated to be able to explain the recommendations of such systems to their patients [ 11 ]. Both explainability methods [ 2 , 40 , 41 ] and more recently developed interpretable models [ 42 – 44 ] have been used extensively within the domain of neuroimaging analysis. Nevertheless, with the exception of our preliminary work on this topic [ 1 ], explainability methods and approaches for estimating model confidence have, to our knowledge, not yet been integrated.…”
Section: Introduction
confidence: 99%
“…Apart from the methods discussed, some researchers have also experimented with intrinsic methods. In Jiang et al (2022), an attentional module parallel to feature extraction was added. [Figure: GBP, Grad-CAM, and Guided Grad-CAM.] GBP is identical to a backward pass, but only propagates positive gradients when passing through a non-linearity.…”
Section: Interpretation
confidence: 99%
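The guided-backpropagation (GBP) rule described in the quoted statement can be made concrete. Below is an illustrative NumPy sketch of the ReLU backward rule only (function names are my own, not from any cited work): an ordinary backward pass routes the gradient wherever the ReLU input was positive, while guided backprop additionally zeroes out negative incoming gradients, so only positive evidence flows back to the input.

```python
import numpy as np

def relu_forward(x):
    # Standard ReLU; the input x is kept for use in the backward pass.
    return np.maximum(x, 0.0)

def relu_backward_plain(grad_out, x):
    # Ordinary backprop through ReLU: pass the gradient wherever
    # the forward input was positive.
    return grad_out * (x > 0)

def relu_backward_guided(grad_out, x):
    # Guided backprop: additionally zero out negative incoming
    # gradients, keeping only positively contributing paths.
    return grad_out * (x > 0) * (grad_out > 0)

x = np.array([-1.0, 2.0, 3.0])   # forward inputs to the ReLU
g = np.array([0.5, -0.4, 0.7])   # gradients arriving from above
plain = relu_backward_plain(g, x)    # [ 0.  -0.4  0.7]
guided = relu_backward_guided(g, x)  # [ 0.   0.   0.7]
```

The guided variant discards the `-0.4` component that ordinary backprop retains, which is why GBP saliency maps tend to be sparser and less noisy.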
“…A number of studies addressed prediction outside the field of brain pathology. Task-based fMRI data have been used to predict task state (Jang et al, 2017 ; Hu et al, 2019 ; Vu et al, 2020 ; Wang et al, 2020b ; Jiang et al, 2022 ; Ngo et al, 2022 ), while EEG data were used to predict attentional state (Zhang et al, 2021 ), sleep stage (Abou Jaoude et al, 2020 ; Akada et al, 2021 ) and brain age (Levakov et al, 2020 ; Niu et al, 2020 ; Ning et al, 2021 ; Ren et al, 2022 ), recognize emotions (Wang et al, 2020a ; Ramzan and Dawn, 2021 ; Bagherzadeh et al, 2022 ; Xiao et al, 2022 ), detect P300 (Solon et al, 2019 ; Borra et al, 2021 ), cortical oscillatory activity (Abdul Nabi Ali et al, 2022 ) and cortical activity during sleep (Li et al, 2020a ). Recently, several studies have used DL to decode motor imagery (Hassanpour et al, 2019 ; Ebrahimi et al, 2020 ; Xu et al, 2020 ; Dehghani et al, 2021 ; Fan et al, 2021 ), which is important in brain-computer interface.…”
Section: Deep Learning Applications In Neuroimaging
confidence: 99%