2022
DOI: 10.3390/molecules27031030
Length-Dependent Deep Learning Model for RNA Secondary Structure Prediction

Abstract: Deep learning methods for RNA secondary structure prediction have shown higher performance than traditional methods, but there is still much room to improve. It is known that the lengths of RNAs are very different, as are their secondary structures. However, the current deep learning methods all use length-independent models, so it is difficult for these models to learn very different secondary structures. Here, we propose a length-dependent model that is obtained by further training the length-independent mod…
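The training scheme the abstract describes — starting from one length-independent model and continuing training on length-restricted subsets — can be sketched as a data-bucketing step. This is a minimal illustration, not the authors' code: the bucket boundaries and function names here are assumptions, and the per-bucket fine-tuning itself is only indicated in a comment.

```python
# Hypothetical sketch of a length-dependent training scheme: group
# (sequence, structure) pairs by length bucket, then fine-tune a separate
# copy of the shared base model on each bucket. Bucket cutoffs below are
# illustrative assumptions, not values from the paper.

from collections import defaultdict

LENGTH_BUCKETS = [(0, 150), (150, 500), (500, float("inf"))]  # assumed cutoffs

def bucket_for(seq: str) -> int:
    """Return the index of the length bucket the sequence falls into."""
    n = len(seq)
    for i, (lo, hi) in enumerate(LENGTH_BUCKETS):
        if lo <= n < hi:
            return i
    raise ValueError(f"no bucket for length {n}")

def split_by_length(dataset):
    """Group (sequence, structure) pairs by length bucket, prior to
    per-bucket fine-tuning of the length-independent base model."""
    buckets = defaultdict(list)
    for seq, struct in dataset:
        buckets[bucket_for(seq)].append((seq, struct))
    return dict(buckets)

# Usage: each resulting group would train its own fine-tuned copy.
data = [("GCAU" * 20, "." * 80), ("GCAU" * 200, "." * 800)]
groups = split_by_length(data)
```

At inference time, a sequence would be routed to the model whose bucket matches its length, which is one plausible reading of "length-dependent model" in the abstract.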

Cited by 15 publications (5 citation statements)
References 42 publications
“…Among the methods developed for predicting RNA secondary structure, a minority can accurately predict the secondary structure of RNAs containing both canonical and noncanonical base pairs (e.g., ). Machine learning and deep learning methods of RNA secondary structure prediction have been proposed (see, for example, references ), many of which demonstrate proficiency in handling both canonical and noncanonical interactions. While these methods have achieved considerable accuracy, challenges persist, especially for long RNA sequences (>500 nucleotides). Indeed, the complexity of possible secondary structures in longer sequences makes accurate prediction more challenging, and further improvements are still needed.…”
Section: Computational Methods for Predicting Non-coding RNA–Disease ...
Confidence: 99%
“…Recently, 2dRNA was improved to take the sequence length of RNA as a feature and showed better performance [79]. These ML-based methods all show higher prediction accuracy than traditional methods of RNA secondary structure prediction. However, their prediction accuracy still leaves room for improvement, especially for long sequences, e.g., longer than 500 residues.…”
Section: Recent Advances in RNA 3D Structure Prediction
Confidence: 98%
“…This is good for secondary structure prediction, where the local features benefit the prediction of local structures such as stems or hairpins, while the global features about these local structures provide information on the global structure, e.g., for pseudoknots. Recently, 2dRNA was improved to take the sequence length of RNA as a feature and showed better performance.…”
Section: Recent Advances in RNA 3D Structure Prediction
Confidence: 99%
“…As existing large RNA secondary structure datasets are curated largely via comparative sequence analysis [19,20], this study focuses on the class of single-sequence-based DL models, referred to as de novo DL models. A number of highly successful de novo DL models have been reported, such as 2dRNA [21], ATTfold [22], DMfold [23], E2Efold [24], MXfold2 [25], SPOT-RNA [26], and Ufold [27], among others [28][29][30][31]. These DL models markedly outperform traditional algorithms, with even close-to-perfect predictions in some cases, though questions on the training vs. test similarity have been raised [32,33] and discussed below.…”
Section: Introduction
Confidence: 99%