2022
DOI: 10.1021/acsomega.2c05881
Improved Prediction Model of Protein and Peptide Toxicity by Integrating Channel Attention into a Convolutional Neural Network and Gated Recurrent Units

Abstract: In recent times, peptides have attracted increasing attention in the biomedical domain for their effect on multiple disease treatments. However, before successful large-scale implementation in industry, accurate identification of peptide toxicity is a vital prerequisite. Existing computational methods have achieved good results in toxicity prediction, and we present an improved model based on different deep learning architectures. The modification mainly focuses on two aspects: se…
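The channel attention modification named in the title typically follows a squeeze-and-excitation pattern: pool each channel to one value, pass the pooled vector through a small bottleneck network, and use the resulting sigmoid gates to reweight the channels. The sketch below is a minimal pure-Python illustration of that general pattern, not the paper's actual implementation; the random weights stand in for parameters a real network would learn.

```python
import math
import random

def channel_attention(feature_maps, reduction=2, seed=0):
    """Minimal squeeze-and-excitation-style channel attention sketch.

    feature_maps: list of C channels, each a list of L scalars.
    Returns the channels reweighted by learned-style attention gates.
    """
    rng = random.Random(seed)
    C = len(feature_maps)
    hidden = max(1, C // reduction)
    # Hypothetical random weights stand in for trained parameters.
    w1 = [[rng.uniform(-1, 1) for _ in range(C)] for _ in range(hidden)]
    w2 = [[rng.uniform(-1, 1) for _ in range(hidden)] for _ in range(C)]

    # Squeeze: global average pooling per channel.
    z = [sum(ch) / len(ch) for ch in feature_maps]

    # Excitation: bottleneck MLP, ReLU then a sigmoid gate per channel.
    h = [max(0.0, sum(w * x for w, x in zip(row, z))) for row in w1]
    s = [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(row, h))))
         for row in w2]

    # Scale: reweight every position in each channel by its gate.
    return [[x * si for x in ch] for ch, si in zip(feature_maps, s)]

# Example: 4 channels of length 3, as might come out of a 1-D CNN layer.
maps = [[1.0, 2.0, 3.0], [0.5, 0.5, 0.5], [2.0, 2.0, 2.0], [0.1, 0.2, 0.3]]
out = channel_attention(maps)
```

In a CNN/GRU pipeline of the kind the abstract describes, such a block would sit between the convolutional feature extractor and the recurrent layer, letting the model emphasize informative channels before sequence modeling.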

Cited by 23 publications (5 citation statements)
References 28 publications
“…The architectures for these methods include convolutional neural networks (CNNs), gated recurrent units (GRUs), and graph neural networks (GNNs) (Table 1). These methods are TOXIFY (26), ToxVec (33), ToxDL (22), ToxinPred2 (23), ATSE (34), ToxIBTL, and ToxIBTL Variational Information Bottleneck (ToxIBTL VIB) (35,36). Contemporary toxin classification models achieve competitive results, though their high performance sometimes holds only within a limited scope.…”
Section: Contemporary Methods
confidence: 99%
“…They achieved an impressive AUROC of 85.00% in terms of predictive accuracy. In another study examining adverse treatment reactions, Zhao et al [207] created a model to forecast protein and peptide toxicity, incorporating channel attention mechanisms to improve feature extraction and reduce dimensionality. Their approach achieved high accuracy rates of 97.38% and 95.03%.…”
Section: Treatment Response
confidence: 99%
“…Transformer-based language models can be divided into two main groups: those trained from scratch and those that are pretrained [18]. A scratch-trained model is a transformer explicitly trained for a particular NLP task without using any pre-existing knowledge from other tasks or datasets [29][30][31]. It is trained on a task-specific dataset, which is generally smaller than the pre-training datasets used for pre-trained models.…”
Section: Significance of the Study
confidence: 99%