2022
DOI: 10.1101/2022.05.03.490535
Preprint

Principled feature attribution for unsupervised gene expression analysis

Abstract: As interest in unsupervised deep learning models for the analysis of gene expression data has grown, an increasing number of methods have been developed to make these deep learning models more interpretable. These methods can be separated into two groups: (1) post hoc analyses of black box models through feature attribution methods and (2) approaches to build inherently interpretable models through biologically-constrained architectures. In this work, we argue that these approaches are not mutually exclusive, …
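As a concrete point of reference for the second group, the sketch below shows what a biologically constrained encoder layer can look like in PyTorch: each latent unit corresponds to one pathway and is wired only to the genes annotated to that pathway through a fixed binary mask. The dimensions, mask, and class name are invented for illustration; this is a minimal sketch of the general idea, not the architecture proposed in the preprint.

```python
# Minimal sketch (illustrative, not the preprint's code): a pathway-masked
# encoder layer. Each latent unit corresponds to one pathway and is connected
# only to the genes annotated to that pathway, via a fixed binary mask.
import torch
import torch.nn as nn

class PathwayMaskedEncoder(nn.Module):
    def __init__(self, mask: torch.Tensor):
        # mask: (n_pathways, n_genes), mask[p, g] = 1 if gene g belongs to pathway p
        super().__init__()
        self.register_buffer("mask", mask.float())
        self.weight = nn.Parameter(0.01 * torch.randn_like(self.mask))
        self.bias = nn.Parameter(torch.zeros(mask.shape[0]))

    def forward(self, x):
        # Zero the weights of gene-pathway pairs that are not annotated,
        # so each latent value depends only on its pathway's genes.
        return torch.relu(x @ (self.weight * self.mask).t() + self.bias)

# Hypothetical toy sizes: 1,000 genes, 50 pathways, ~5% membership per pathway.
mask = (torch.rand(50, 1000) < 0.05).float()
encoder = PathwayMaskedEncoder(mask)
z = encoder(torch.randn(8, 1000))  # (8, 50) "pathway activity" latent codes
```

A standard dense encoder of the same shape (e.g. nn.Linear(1000, 50) without the mask) would instead play the role of the black box that the first group of methods explains post hoc.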

Cited by 4 publications (3 citation statements)
References 81 publications
“…One could still apply a post-hoc explanation to the by-design neural networks (e.g., as in Elmarakeby et al (2021)) to explain the model prediction using importance scores over input features rather than model internals. In addition, a recent method, named PAUSE (Janizek et al, 2022), demonstrates one way to bridge post-hoc and by-design methods. Specifically, PAUSE is a biologically-constrained autoencoder model that is explained using a post-hoc game theoretic approach.…”
Section: Preliminaries on IML Methods (mentioning, confidence: 99%)
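To make the post hoc, game-theoretic half of that bridge concrete, the sketch below attributes a single latent unit of an encoder back to the input genes with integrated gradients, a common gradient-based approximation to Shapley-style attributions. The stand-in encoder, baseline, and gene counts are assumptions made for illustration; this is not presented as the exact attribution procedure used by PAUSE.

```python
# Illustrative sketch: post hoc attribution of one latent unit to input genes
# via integrated gradients. Any encoder mapping genes -> latent codes works
# here (e.g. the pathway-masked encoder sketched earlier); a plain dense
# encoder is used so the example is self-contained.
import torch
import torch.nn as nn

def integrated_gradients(encoder, x, baseline, latent_idx, steps=50):
    # Average the gradient of latent unit `latent_idx` along the straight-line
    # path from `baseline` to `x`, then scale by the input difference.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)       # (steps, n_genes)
    path.requires_grad_(True)
    encoder(path)[:, latent_idx].sum().backward()
    avg_grad = path.grad.mean(dim=0)                # (n_genes,)
    return (x - baseline).squeeze(0) * avg_grad     # per-gene attribution

encoder = nn.Sequential(nn.Linear(1000, 50), nn.ReLU())  # stand-in encoder
x = torch.randn(1, 1000)            # one expression profile (hypothetical)
baseline = torch.zeros(1, 1000)     # all-zero reference profile
attr = integrated_gradients(encoder, x, baseline, latent_idx=3)
top_genes = attr.abs().topk(10).indices  # genes driving latent unit 3 most
```

The same routine can be pointed at a pathway-masked encoder like the one sketched above, in which case the per-gene attributions explain a named pathway node rather than an anonymous latent dimension.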
“…To date, it is unclear whether or how robustness and bias-susceptibility affect different biology-inspired deep learning models. Indeed, broadly reviewing biology-inspired models 13–15,17,18,22–41, we found that (out of 25 models) only the BIOS model 25 was trained in replicates and only the DTox model 23 compared interpretations to networks trained on shuffled labels to rigorously control robustness and network biases.…”
Section: Introduction (mentioning, confidence: 99%)
“…Deep neural networks, particularly autoencoders, are extensively employed in integrating and analyzing single-cell data, demonstrating outstanding performance in tasks such as batch correction, dimension reduction, and perturbation modeling (Lopez et al, 2018; Inecik et al, 2022; Heumos et al, 2023). While biologically informed deep learning is an active research area (Lotfollahi et al, 2023; Qoku & Buettner, 2023; Conard et al, 2023; Janizek et al, 2023), a method is currently lacking that enables robust and flexible manipulation of an autoencoder model's behavior in order to enhance the influence of a freely chosen subset of input features on the latent space or the reconstruction process. In this study, we present scARE, single cell attribution …”
Section: Introduction (mentioning, confidence: 99%)