2019
DOI: 10.1186/s12859-019-2957-4
Visualizing complex feature interactions and feature sharing in genomic deep neural networks

Abstract: Background: Visualization tools for deep learning models typically focus on discovering key input features without considering how such low-level features are combined in intermediate layers to make decisions. Moreover, many of these methods examine a network’s response to specific input examples that may be insufficient to reveal the complexity of model decision making. Results: We present DeepResolve, an analysis framework for deep convolutional models of genome function…

Cited by 28 publications (21 citation statements). References 29 publications.
“…However, attribution methods explored here are first-order interpretability methods, defining the importance of individual nucleotides, not the importance of the entire motif on model predictions. Although recent progress is extending this class of methods to second-order attributions [53, 56–58], they cannot uncover the effect size of motifs on model predictions. Global interpretability analysis via in silico experiments is one avenue that shows great promise in uncovering the importance of whole features [59].…”
Section: Discussion (mentioning)
confidence: 99%
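To make the distinction in the statement above concrete, the sketch below computes a first-order attribution (gradient × input) over a one-hot DNA sequence: it scores individual nucleotides but says nothing about the effect size of a whole motif. The toy convolutional model, shapes, and names are illustrative assumptions, not code from the cited papers or from DeepResolve.

```python
# Minimal sketch of first-order (saliency-style) attribution on one-hot DNA.
import torch
import torch.nn as nn

# Toy convolutional model: (batch, 4, length) -> scalar logit per sequence.
model = nn.Sequential(
    nn.Conv1d(4, 8, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)

# Random one-hot sequence of length 100.
seq = torch.zeros(1, 4, 100)
seq[0, torch.randint(0, 4, (100,)), torch.arange(100)] = 1.0
seq.requires_grad_(True)

model(seq).sum().backward()

# Gradient * input keeps only the observed base at each position, giving a
# per-nucleotide importance score (no motif-level effect size).
saliency = (seq.grad * seq).sum(dim=1).detach()   # shape: (1, 100)
print(saliency.shape)
```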
“…The major drawback of DFIM is that it is computationally expensive: the interactions are inferred in a separate post-processing step that involves recomputing network gradients. We note that the recent DeepResolve method infers feature importance and whether a feature participates in interactions with other features, but does not infer pairs of interacting features explicitly [Liu et al., 2019].…”
Section: Introduction (mentioning)
confidence: 96%
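As an illustration of why that post-processing is expensive, the hedged sketch below scores interactions roughly in the spirit of DFIM: each perturbation of a source position requires a fresh forward/backward pass to recompute importances at all target positions. The toy model, the flat-prior perturbation, and the function names are assumptions for illustration only, not the published method.

```python
# Rough sketch: cost of pairwise interaction scoring via gradient recomputation.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(4, 8, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)

def importance(x):
    """First-order per-position importance (gradient * input)."""
    x = x.clone().detach().requires_grad_(True)
    model(x).sum().backward()
    return (x.grad * x).sum(dim=1).detach()            # (batch, length)

def pairwise_interaction_scores(x, source_positions):
    """Change in every target position's importance when a source is perturbed."""
    base = importance(x)
    scores = {}
    for pos in source_positions:                       # one extra backward pass per source
        perturbed = x.clone()
        perturbed[:, :, pos] = 0.25                    # replace source base with a flat prior
        scores[pos] = base - importance(perturbed)     # (batch, length) of deltas
    return scores

seq = torch.zeros(1, 4, 100)
seq[0, torch.randint(0, 4, (100,)), torch.arange(100)] = 1.0
deltas = pairwise_interaction_scores(seq, source_positions=[10, 50, 90])
print({p: d.shape for p, d in deltas.items()})
```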
“…Recent progress has expanded the ability to probe interactions between putative motifs [37–39]. For instance, MaxEnt Interpretation uses Markov Chain Monte Carlo to sample sequences that produce a similar activation profile in the penultimate layer of the DNN [37], allowing for downstream analysis of these sequences.…”
Section: Introduction (mentioning)
confidence: 99%
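A minimal sketch of that general idea, assuming a toy network and a squared-error "energy" between penultimate-layer activations: a Metropolis-Hastings walk over single-base mutations that preferentially accepts proposals keeping the activation profile close to a target. The model, energy function, and temperature are assumptions; none of this is the published MaxEnt Interpretation code.

```python
# Hedged sketch: MCMC over one-hot sequences matching a penultimate-layer profile.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Conv1d(4, 8, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),
    nn.Flatten(),                          # output treated as "penultimate" features
)

def penultimate(x):
    with torch.no_grad():
        return model(x)                    # (1, 8)

def random_one_hot(length=100):
    x = torch.zeros(1, 4, length)
    x[0, torch.randint(0, 4, (length,)), torch.arange(length)] = 1.0
    return x

target = penultimate(random_one_hot())     # activation profile to match
seq = random_one_hot()
energy = ((penultimate(seq) - target) ** 2).sum()
temperature = 0.1

for _ in range(2000):                      # Metropolis-Hastings over single-base mutations
    pos = torch.randint(0, 100, (1,)).item()
    base = torch.randint(0, 4, (1,)).item()
    proposal = seq.clone()
    proposal[0, :, pos] = 0.0
    proposal[0, base, pos] = 1.0
    new_energy = ((penultimate(proposal) - target) ** 2).sum()
    # Accept if the activation-matching energy improves, or with Boltzmann probability.
    if torch.rand(1).item() < torch.exp((energy - new_energy) / temperature).item():
        seq, energy = proposal, new_energy
```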