2022
DOI: 10.3389/fninf.2021.802938
Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network

Abstract: Deep neural networks (DNNs) can accurately decode task-related information from brain activations. However, because of the non-linearity of DNNs, it is generally difficult to explain how and why they assign certain behavioral tasks to given brain activations, either correctly or incorrectly. One of the promising approaches for explaining such a black-box system is counterfactual explanation. In this framework, the behavior of a black-box system is explained by comparing real data and realistic synthetic data t…
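The counterfactual framework sketched in the abstract can be illustrated with a small toy example. The code below is not the authors' implementation; the classifier, generator, input sizes, and the seven-class setting are illustrative assumptions. It only shows the basic flow: ask the black-box classifier for its prediction, generate a class-conditional synthetic counterpart, and read the explanation off the difference between the real and synthetic inputs.

```python
# Minimal sketch (not the paper's code) of counterfactual explanation by
# comparing a real input with a class-conditional synthetic counterpart.
import torch
import torch.nn as nn

N_TASKS = 7       # hypothetical number of behavioral task classes
IN_CH = 1         # single-channel 2-D "activation map" stand-in

class ToyClassifier(nn.Module):
    """Black-box classifier mapping an activation map to task logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(IN_CH, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, N_TASKS))
    def forward(self, x):
        return self.net(x)

class ToyGenerator(nn.Module):
    """Maps (activation map, target class) to a synthetic counterfactual map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(IN_CH + N_TASKS, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, IN_CH, 3, padding=1), nn.Tanh())
    def forward(self, x, target):
        # Tile a one-hot target label over the spatial grid (multi-class conditioning).
        onehot = torch.eye(N_TASKS)[target].view(-1, N_TASKS, 1, 1)
        cond = onehot.expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, cond], dim=1))

classifier, generator = ToyClassifier(), ToyGenerator()   # assumed pre-trained in practice
x_real = torch.randn(1, IN_CH, 32, 32)                    # one real activation map (toy data)
pred = classifier(x_real).argmax(dim=1)                   # class the black box assigns
target = (pred + 1) % N_TASKS                             # any other class as counterfactual target
x_cf = generator(x_real, target)                          # realistic synthetic counterpart
explanation = (x_real - x_cf).abs()                       # voxel-wise change needed to flip the decision
print(pred.item(), target.item(), explanation.shape)
```

In practice the generator would be trained adversarially so that the synthetic map stays realistic; here it is untrained and serves only to show the data flow.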

Cited by 7 publications (1 citation statement) | References 31 publications
“…StarGAN performs image translation among more than one pair of classes. In [29], a counterfactual activation generator (CAG) implements image translation for seven classes. This setting extracts task-sensitive features from brain activations by enforcing the same ground truth for both real and synthetic images.…”
Section: A. MRI-to-CT Translation
Mentioning, confidence: 99%
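The training signal this citation statement alludes to can be sketched briefly: a generator conditioned on a target task label produces synthetic maps, and an auxiliary task classifier is asked to recover the ground-truth label on real data and the requested label on synthetic data. The module names, shapes, and seven-class setup below are illustrative stand-ins, not the cited implementation.

```python
# Hypothetical sketch of a StarGAN-style classification-consistency loss.
import torch
import torch.nn.functional as F

N_TASKS = 7                                      # seven task classes, as in the cited setting
x_real = torch.randn(4, 64)                      # batch of flattened activation maps (toy size)
y_true = torch.randint(0, N_TASKS, (4,))         # ground-truth task labels
y_target = torch.randint(0, N_TASKS, (4,))       # requested counterfactual task labels

gen = torch.nn.Linear(64 + N_TASKS, 64)          # stand-in for the counterfactual activation generator
cls = torch.nn.Linear(64, N_TASKS)               # stand-in for the auxiliary task classifier

def translate(x, y):
    """Concatenate a one-hot target label to the input before generating (multi-class conditioning)."""
    return gen(torch.cat([x, F.one_hot(y, N_TASKS).float()], dim=1))

x_fake = translate(x_real, y_target)
loss_cls_real = F.cross_entropy(cls(x_real), y_true)    # classifier must recover the true task on real maps
loss_cls_fake = F.cross_entropy(cls(x_fake), y_target)  # ...and the requested task on synthetic maps
loss = loss_cls_real + loss_cls_fake                    # combined with adversarial/cycle terms in practice
loss.backward()
```

Penalizing both terms with the same classifier is what pushes the generator to edit only task-sensitive features during translation.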