2013
DOI: 10.1016/j.ab.2013.05.024
iHSP-PseRAAAC: Identifying the heat shock protein families using pseudo reduced amino acid alphabet composition

Cited by 279 publications (131 citation statements) · References 83 publications · Citing publications span 2014–2024.
“…Using Eq. (16) would make the meaning of the four metrics crystal clear even for experimental scientists as elaborated in [10,12,69,70].…”
Section: Performance Evaluation Methods (mentioning)
confidence: 99%
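
The passage above cites an equation (Eq. (16) of the citing paper, not reproduced here) said to make "the four metrics" intuitive. As context only, the four metrics typically reported in this literature are sensitivity, specificity, accuracy, and the Matthews correlation coefficient; below is a minimal Python sketch of their computation from confusion-matrix counts (an illustration, not the cited formulation).

# Minimal sketch: the four metrics commonly reported in this literature
# (sensitivity, specificity, accuracy, Matthews correlation coefficient),
# computed from confusion-matrix counts. This is illustrative only and is
# not a reproduction of Eq. (16) of the citing paper.
from math import sqrt

def four_metrics(tp: int, tn: int, fp: int, fn: int):
    sn = tp / (tp + fn)                     # sensitivity (recall on positives)
    sp = tn / (tn + fp)                     # specificity (recall on negatives)
    acc = (tp + tn) / (tp + tn + fp + fn)   # overall accuracy
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0  # Matthews correlation
    return sn, sp, acc, mcc

# Example with hypothetical counts
print(four_metrics(tp=85, tn=90, fp=10, fn=15))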
“…In bioinformatics, the simplified amino acid alphabets are also widely used. The simplified alphabets are used, but not limited, in the prediction of protein-protein interaction and interface [126][127][128], nuclear receptors [129], DNA-binding proteins [130], defensin family and subfamily [131], heat shock protein family [132], residue flexibility [133,134], ability of protein crystallization [135] and various kinds of applications [136,137]. For example, the simplified alphabet (six groups) could be used to detect the sequence conservation related to the super-families of proteins [138,139].…”
Section: Implications Of Simplified Amino Acid Alphabets (mentioning)
confidence: 99%
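
To make the idea of a simplified (reduced) amino acid alphabet concrete, the sketch below maps the 20 standard residues into a small number of groups and describes a sequence by its composition over those groups. The five-group partition is a hypothetical example for illustration only; it is not the reduction scheme used by iHSP-PseRAAAC or by any of the works cited above.

# Minimal sketch of a reduced amino acid alphabet composition.
# The 5-group partition below is a hypothetical example, not the
# particular reduction scheme used in the cited papers.
GROUPS = {
    "aliphatic": "AGILPV",
    "aromatic":  "FWY",
    "polar":     "CMNQST",
    "positive":  "HKR",
    "negative":  "DE",
}
RESIDUE_TO_GROUP = {aa: g for g, aas in GROUPS.items() for aa in aas}

def reduced_composition(seq: str) -> dict:
    """Fraction of residues falling into each reduced-alphabet group."""
    counts = {g: 0 for g in GROUPS}
    for aa in seq.upper():
        g = RESIDUE_TO_GROUP.get(aa)
        if g is not None:          # skip non-standard residues
            counts[g] += 1
    total = sum(counts.values()) or 1
    return {g: c / total for g, c in counts.items()}

# Example on a toy sequence
print(reduced_composition("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))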
“…This is because almost all the existing machine-learning algorithms, such as "Neural Network" or NN algorithm [1][2][3] "Support Vector Machine" or SVM algorithm [4][5][6][7][8][9][10][11][12] "Nearest Neighbor" or NN algorithm [13,14] and "Random Forest" algorithm [15][16][17][18][19][20][21][22] can only handle vectors but not sequence samples as elucidated in a review paper [23]. Unfortunately, if using the sequential model, i.e., the model in which all the samples are represented by their original sequences, it is hardly able to train a machine learning model that can cover all the possible cases concerned, as elaborated in [24].…”
Section: Introduction (mentioning)
confidence: 99%
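
The passage above explains that these algorithms accept fixed-length feature vectors rather than variable-length sequences, which is why protein sequences are first encoded as composition-style vectors. The sketch below illustrates that conversion with a plain amino acid composition and an SVM from scikit-learn; both choices are illustrative assumptions, not the exact pipeline of the cited works.

# Minimal sketch: variable-length protein sequences are converted to
# fixed-length composition vectors so a vector-based classifier (here an
# SVM from scikit-learn, chosen only for illustration) can be trained.
import numpy as np
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def composition_vector(seq: str) -> np.ndarray:
    """20-dimensional amino acid composition of a sequence."""
    seq = seq.upper()
    counts = np.array([seq.count(a) for a in AA], dtype=float)
    return counts / max(len(seq), 1)

# Toy data: two hypothetical sequences per class, purely illustrative.
seqs = ["MKTAYIAKQR", "MKSAYLAKQK", "GGGPPPGGGP", "PPGGPGPGPP"]
labels = [0, 0, 1, 1]
X = np.vstack([composition_vector(s) for s in seqs])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict([composition_vector("MKTAYLAKQR")]))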