2012
DOI: 10.1016/j.eswa.2011.12.059
Towards designing modular recurrent neural networks in learning protein secondary structures

Cited by 14 publications (8 citation statements)
References 38 publications
“…In reality, an ingenious ANN can accurately predict the boundaries of a class. Experimental results of various studies have shown that recurrent neural networks (RNNs) [18][19][20] are very effective for processing protein sequence data. A variant of the RNN, the bidirectional recurrent neural network, is known to exploit information about the complete structure.…”
Section: Related Work
confidence: 99%
“…Neural networks and deep learning [2,4,5,12,40,44,49,50]; support vector machines (SVM) [32,39,55,56]; multi-component approaches [2,4,5,9,12,25,41,49,53] [7,18,39,44,50,54-56]. Probabilistic and mining methods: primary probabilistic methods [11,14] are based on empirical analytics and mainly compute the tendency of each amino acid in a protein sequence to form a particular secondary structure (i.e. probabilities are calculated from the frequency of each amino acid in each secondary class).…”
Section: Natural
confidence: 99%
“…The first group of multi-component approaches [9,12,41,49,53] employs complementary modules alongside the learning algorithms to improve the prediction results. Similarly, aiming to boost prediction accuracy, the second category [2,4,5,25,39] exploits multiple classifiers of the same type with various features. The third class of multi-component approaches [7,18,44,50,54-56] combines classifiers of different types [1][35] that are at times equipped with complementary components.…”
Section: Natural
confidence: 99%
“…Such components have valuable applications, especially to vision and language datasets [13,14]. Deep NNs (DNNs) have been applied successfully in speech recognition [15][16][17][18][19], voice conversion [20,21], bioinformatics [22][23][24], face recognition [11,25,26], object recognition [27,28], dimensionality reduction [10,29-32], and texture modeling [33].…”
Section: Introduction
confidence: 99%