2021
DOI: 10.1101/2021.07.13.452181
Preprint

Designing Interpretable Convolution-Based Hybrid Networks for Genomics

Abstract: Hybrid networks that build upon convolutional layers with attention mechanisms have demonstrated improved performance relative to pure convolutional networks across many regulatory genome analysis tasks. Their inductive bias to learn long-range interactions provides an avenue to identify learned motif-motif interactions. For attention maps to be interpretable, the convolutional layer(s) must learn identifiable motifs. Here we systematically investigate the extent to which architectural choices in convolution-based…
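The hybrid design the abstract describes can be illustrated with a minimal sketch. Everything below is an assumption for illustration (NumPy only, random untrained weights, made-up shapes), not the paper's actual architecture: a convolutional layer scans one-hot DNA for motif-like patterns, and a single self-attention head over the resulting feature map yields an attention matrix whose strong off-diagonal entries would correspond to candidate motif-motif interactions.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(seq):
    """One-hot encode a DNA string as a (length, 4) array (A, C, G, T)."""
    lut = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        x[i, lut[base]] = 1.0
    return x

def conv1d_relu(x, filters):
    """Valid 1-D convolution with ReLU: x is (L, 4), filters is (n_filters, k, 4)."""
    n_f, k, _ = filters.shape
    out_len = x.shape[0] - k + 1
    out = np.zeros((out_len, n_f))
    for i in range(out_len):
        out[i] = np.tensordot(filters, x[i:i + k], axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)

def self_attention(h):
    """Single-head self-attention; returns the output and the attention map."""
    d = h.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    scores = q @ k.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ v, attn

x = one_hot("ACGT" * 25)                   # toy 100-bp sequence
filters = rng.standard_normal((8, 12, 4))  # 8 filters, width 12 (arbitrary)
h = conv1d_relu(x, filters)                # (89, 8) motif-scan feature map
out, attn = self_attention(h)              # attn is (89, 89); rows sum to 1
```

In a trained model, large entries `attn[i, j]` with `i != j` would indicate that the representation at position `i` attends to a motif detected at position `j`, which is the interpretability avenue the abstract refers to.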

Cited by 5 publications (6 citation statements)
References 22 publications
“…We optimized the model architecture, including kernel size and max pooling steps, to capture relevant genomic motifs (Fig. 5a) 90,91 . To identify potential cis-regulatory elements, we used the filter weights from the CNN module of APA-Net, which represent learned sequence motifs.…”
Section: Results
confidence: 99%
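One simple way to read motifs out of first-layer filter weights, as the statement above describes, is sketched below with random stand-in weights. This is a common heuristic (softmax the weights over the four bases at each position to get a PWM-like matrix, then score per-position information content), not APA-Net's actual procedure, which the quote does not detail; aligning high-activation subsequences is the more standard alternative.

```python
import numpy as np

def filter_to_pwm(w, temperature=1.0):
    """Map a (k, 4) first-layer filter to a PWM-like matrix by
    softmaxing the weights over the 4 bases at each position."""
    z = w / temperature
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def information_content(pwm, eps=1e-9):
    """Per-position information content in bits, relative to a
    uniform background over A/C/G/T (maximum of 2 bits)."""
    return 2.0 + np.sum(pwm * np.log2(pwm + eps), axis=1)

rng = np.random.default_rng(1)
w = rng.standard_normal((12, 4))   # one hypothetical learned filter, width 12
pwm = filter_to_pwm(w)             # each row (position) sums to 1
ic = information_content(pwm)      # high-IC positions mark the motif core
```

The `temperature` knob controls how sharply the weights are converted to probabilities; it is a free parameter of this visualization, not something the network learns.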
“…First-layer convolutional filters provide an inductive bias to learn translational patterns such as motifs. However, the extent to which they learn interpretable motifs largely depends on design choices, such as the max-pool size 40 , activation function 32 , or even the utilization of batch normalization 41 . It is not yet clear whether the same design principles established for binary models extend to quantitative models.…”
Section: Results
confidence: 99%
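The max-pool-size point in the statement above can be illustrated with a toy example (an assumption for illustration, not taken from the cited works): larger non-overlapping pools discard positional resolution, so a motif hit can no longer be localized precisely from the pooled feature map, which in turn affects how cleanly downstream layers and attribution methods can be tied back to motifs.

```python
import numpy as np

def max_pool(h, size):
    """Non-overlapping 1-D max-pooling over the position axis."""
    usable = (len(h) // size) * size
    return h[:usable].reshape(-1, size).max(axis=1)

act = np.zeros(24)
act[5] = 1.0                 # a single strong motif "hit" at position 5
small = max_pool(act, 2)     # hit lands in bin 2 (positions 4-5): still precise
large = max_pool(act, 8)     # hit lands in bin 0 (positions 0-7): 8x coarser
```

Both pooled maps preserve the hit's magnitude, but the larger pool can only place it somewhere within an 8-position window.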
“…4). Though, to our knowledge, this class of models is new to proteins, convolution-attention hybrid models have been used in genomics and found to serve as a sound inductive bias for discovering motifs and their interactions 27,28 .…”
Section: A Hybrid Neural-network Predicts Nucleation
confidence: 99%