2020
DOI: 10.1109/access.2020.2966592
Identification of Protein Lysine Crotonylation Sites by a Deep Learning Framework With Convolutional Neural Networks

Abstract: Protein lysine crotonylation (Kcr) is an important type of post-translational modification that regulates various cellular activities. Experimental approaches to identifying Kcr sites are time-consuming, so computational prediction approaches are needed. Previously, a few classifiers were based on over 100 Kcr sites from histone proteins. Recently, thousands of Kcr sites have been experimentally verified on non-histone proteins from the plant species papaya. We found that the previous classifiers f…

Cited by 31 publications (39 citation statements) · References 26 publications
“…As the DL algorithms showed superiority over the traditional ML algorithms for a few PTM predictions in our previous studies [21,22] Fig. 3).…”
Section: CNNOH Showed Superior Performance
Mentioning confidence: 52%
“…The EGAAC feature [22] is developed based on the grouped amino acids content (GAAC) feature [28,29]. In the GAAC feature, the 20 amino acid types are categorized into five groups (g1: GAVLMI, g2: FYW, g3: KRH, g4: DE and g5:…”
Section: The Enhanced Grouped Amino Acids Content (EGAAC) Encoding
Mentioning confidence: 99%
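The grouping described above can be sketched as a simple frequency encoding. Below is a minimal Python sketch, assuming the standard five-group GAAC scheme; note the quoted snippet truncates before listing g5, so the assignment of the remaining residues (S, T, C, P, N, Q) to g5 is an assumption. The sliding-window size for EGAAC is likewise an assumed parameter, not taken from the paper.

```python
# Minimal GAAC/EGAAC sketch. Group g5 is an assumption: the quoted
# snippet truncates before listing its members.
GROUPS = {
    "g1": set("GAVLMI"),  # aliphatic
    "g2": set("FYW"),     # aromatic
    "g3": set("KRH"),     # positively charged
    "g4": set("DE"),      # negatively charged
    "g5": set("STCPNQ"),  # assumed: the remaining (uncharged) residues
}

def gaac(seq):
    """Fraction of residues falling in each of the five groups."""
    n = len(seq)
    return {g: sum(aa in members for aa in seq) / n
            for g, members in GROUPS.items()}

def egaac(seq, window=5):
    """Enhanced GAAC: GAAC recomputed over every sliding window,
    so positional information along the peptide is retained."""
    return [gaac(seq[i:i + window]) for i in range(len(seq) - window + 1)]
```

For a peptide window of length L and a sliding window of 5, this yields (L − 4) × 5 features per candidate site, versus only 5 for plain GAAC.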
“…We compared the performances of these algorithms in terms of Acc, Sn, MCC, AUC and AUC01 values for both the ten-fold cross-validation and the independent test (Table 1). In our previous studies, DL models showed superior performance to traditional ML models [33,42]. This still holds for the CSO prediction.…”
Section: LSTM_WE Classifier Performed Favorably to Other ML Classifiers
Mentioning confidence: 90%
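Most of the metrics named in this snippet follow directly from confusion-matrix counts; AUC and AUC01 (the partial AUC over low false-positive rates) additionally require ranked prediction scores and are omitted here. A self-contained sketch of Acc, Sn, and MCC:

```python
import math

def acc_sn_mcc(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and Matthews correlation
    coefficient from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sn = tp / (tp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, sn, mcc
```

MCC is often preferred over accuracy for PTM-site data because negative sites heavily outnumber positives, and MCC penalizes classifiers that exploit that imbalance.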
“…LSTM_WE again showed the largest AUC01 values for both the ten-fold cross-validation and the independent test (Figure 4B&D). As the encoding approach has a great impact on the traditional ML models [33,42,43] and the WE approach integrated with LSTM had the best performance in this study, we investigated whether the integration of WE and RF would also perform well. Accordingly, we extracted the embedding-layer vectors from LSTM_WE as the feature encoding and trained an RF model, dubbed RF_WE.…”
Section: LSTM_WE Classifier Performed Favorably to Other ML Classifiers
Mentioning confidence: 99%
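The RF_WE construction — reusing a learned embedding as the feature encoding for a traditional classifier — can be sketched as follows. This is an illustration only: a random lookup table stands in for the trained LSTM_WE embedding layer, and the residue ordering, embedding dimension, window length, and toy data are all assumptions, not the cited paper's values.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

# Stand-in for the trained embedding layer of LSTM_WE: in practice
# these weights would be extracted from the trained deep model.
EMBED_DIM = 8
embedding = rng.normal(size=(len(AMINO_ACIDS), EMBED_DIM))

def encode(peptide):
    """Look up each residue's embedding vector and flatten them into
    one fixed-length feature vector for the peptide."""
    idx = [AA_INDEX[aa] for aa in peptide]
    return embedding[idx].ravel()

# Toy data: 31-residue windows centred on a candidate lysine
# (window length is an assumed, illustrative choice).
peptides = ["".join(rng.choice(list(AMINO_ACIDS), size=31))
            for _ in range(60)]
labels = rng.integers(0, 2, size=60)

X = np.stack([encode(p) for p in peptides])
rf_we = RandomForestClassifier(n_estimators=100, random_state=0)
rf_we.fit(X, labels)
```

The design point the snippet makes is that the encoding, not only the classifier, drives performance: feeding the RF the embedding features isolates how much of LSTM_WE's advantage comes from the learned representation itself.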