2023
DOI: 10.1186/s12864-023-09468-1
iEnhancer-DCSA: identifying enhancers via dual-scale convolution and spatial attention

Abstract: Background: Due to the dynamic nature of enhancers, identifying enhancers and their strength is a major bioinformatics challenge. With the development of deep learning, several models have facilitated enhancer detection in recent years. However, existing studies either neglect motif information at different lengths or treat the features at all spatial locations equally. How to effectively use multi-scale motif information while ignoring irrelevant information is a question worthy of serious consideration. …
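The abstract describes two ingredients: convolutions at two kernel scales (to capture motifs of different lengths) and spatial attention (to downweight irrelevant positions). A minimal numpy sketch of that idea follows; the kernel sizes (5 and 11), channel counts, and the mean-then-softmax attention scoring are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def one_hot(seq):
    # Encode a DNA string as a (4, L) matrix, one row per base.
    mapping = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = np.zeros((4, len(seq)))
    for i, base in enumerate(seq):
        x[mapping[base], i] = 1.0
    return x

def conv1d_relu(x, w):
    # x: (C_in, L), w: (C_out, C_in, K); 'same' padding, ReLU activation.
    c_out, c_in, k = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    L = x.shape[1]
    out = np.zeros((c_out, L))
    for o in range(c_out):
        for t in range(L):
            out[o, t] = np.sum(w[o] * xp[:, t:t + k])
    return np.maximum(out, 0.0)

def spatial_attention(feat):
    # feat: (C, L). Score each position by its channel mean, softmax
    # over positions, then reweight — positions with low scores are
    # effectively ignored (the "irrelevant information" in the abstract).
    scores = feat.mean(axis=0)
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()
    return feat * alpha

rng = np.random.default_rng(0)
seq = "ACGTACGTGGCCAATT" * 4            # toy 64-bp sequence
x = one_hot(seq)
w_small = rng.normal(scale=0.1, size=(8, 4, 5))   # small-scale motif filters (assumed size)
w_large = rng.normal(scale=0.1, size=(8, 4, 11))  # large-scale motif filters (assumed size)

# Dual-scale convolution: run both filter banks, fuse by concatenation.
feat = np.concatenate([conv1d_relu(x, w_small), conv1d_relu(x, w_large)], axis=0)
att = spatial_attention(feat)
pooled = att.sum(axis=1)                 # global pooling over positions
print(pooled.shape)                      # (16,)
```

A trained classifier head would then map `pooled` to enhancer / non-enhancer (and strength) labels; that part is omitted here.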

Cited by 4 publications (2 citation statements); references 36 publications.
“… Wang et al.28 iEnhancer-GAN: building a CNN architecture by combining word-embedding skip-gram and sequence-generating adversarial networks. Bao et al.31 Enhancer-LSTMAtt: after simple encoding of sequences, using Bi-LSTM and attention-based deep learning methods. Huang et al.…”
Section: Results (citation type: mentioning, confidence: 99%)
“…They eventually developed the iEnhancer-DCSA model, a convolutional neural network employing dual-scale fusion.28 Basith et al. utilized seven encodings, including DPCP and k-mer, and integrated five machine learning methods, such as RF, SVM, and XGB, to establish an enhancer prediction model.…”
Section: Introduction (citation type: mentioning, confidence: 99%)