2021
DOI: 10.1093/bioinformatics/btab533
Transfer learning via multi-scale convolutional neural layers for human–virus protein–protein interaction prediction

Abstract: Motivation: To complement experimental efforts, machine learning-based computational methods are playing an increasingly important role in predicting human–virus protein–protein interactions (PPIs). Furthermore, transfer learning can effectively apply prior knowledge obtained from a large source dataset/task to a small target dataset/task, improving prediction performance. Results: To predict interactions between human and viral p…


Cited by 44 publications (48 citation statements)
References 48 publications
“…We compared the performance of cross-attention PHV with seven state-of-the-art methods, including Denovo [11], Zhou et al’s SVM-based method [14], Alguwaizani et al’s SVM-based method [15], Yang et al’s random forest–based and Doc2vec-based method [24], DeepViral [17], and Yang et al’s CNN-based method [16], using Denovo’s test dataset. As shown in Table 3, cross-attention PHV predicted the PPIs with an AC value >0.95 and outperformed the state-of-the-art models in five metrics, including SN, AC, AUC, MCC, and F1, demonstrating the superiority of cross-attention PHV.…”
Section: Results (mentioning; confidence: 99%)
“…Deep learning–based models have overcome such problems, however. For example, Yang et al embedded local features such as binding motifs into feature matrices and captured their patterns using a convolutional neural network (CNN) [16]. They applied two different transfer learning methods to improve the generalizability of the model.…”
Section: Introduction (mentioning; confidence: 99%)
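The two transfer learning schemes alluded to above are commonly (a) freezing the pretrained feature layers and retraining only the classifier head, or (b) fine-tuning all weights initialized from the source task. A minimal dependency-free sketch of that distinction, with illustrative names only and gradient updates faked as a constant step:

```python
# Hypothetical sketch of two transfer learning modes (NOT the authors' code):
# "freeze"   -> pretrained feature weights are kept fixed; only the head trains
# "finetune" -> all weights start from the source task and continue training
def transfer(pretrained, mode, step=0.1):
    feature_w = list(pretrained["features"])
    head_w = list(pretrained["head"])
    if mode == "freeze":
        # (a) feature extractor frozen; only the head is updated
        head_w = [w - step for w in head_w]
    elif mode == "finetune":
        # (b) every layer keeps training from the pretrained initialization
        feature_w = [w - step for w in feature_w]
        head_w = [w - step for w in head_w]
    return {"features": feature_w, "head": head_w}

src = {"features": [1.0, 1.0], "head": [0.5]}
print(transfer(src, "freeze"))    # feature weights unchanged, head updated
print(transfer(src, "finetune"))  # all weights updated
```

Freezing is the cheaper option and guards against overfitting when the target human–virus PPI dataset is small; full fine-tuning typically wins when more target data is available.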
“…PSSM is generated by applying Position-Specific Iterative BLAST (PSI-BLAST) searches against a protein database (such as the UniRef50 database [54]). In DPPI [33] and TransPPI [55], the PSSM is an L × 20 matrix, where L is the length of the protein sequence and each element (i, j) of the matrix denotes the probability of the j-th amino acid occurring at the i-th position of the sequence. The only drawback of this method is that it requires an enormous effort for the PSI-BLAST search.…”
Section: Deep Learning Methodology (mentioning; confidence: 99%)
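The PSSM representation described above can be sketched as an L × 20 matrix whose row i is a probability distribution over the 20 standard amino acids at position i. Real pipelines obtain the underlying scores from PSI-BLAST; the helper below (a hypothetical name, not from any of the cited tools) only illustrates the shape and a row-wise softmax normalization:

```python
# Minimal sketch of the L x 20 PSSM shape, assuming raw log-odds scores
# have already been obtained from a PSI-BLAST search.
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues

def normalize_pssm(raw_scores):
    """Convert raw log-odds PSSM rows (L rows x 20 columns) into
    per-position probability distributions via a row-wise softmax."""
    pssm = []
    for row in raw_scores:
        exp_row = [math.exp(s) for s in row]
        total = sum(exp_row)
        pssm.append([v / total for v in exp_row])
    return pssm

# Toy example: a length-3 "protein" with uniform raw scores.
raw = [[0.0] * len(AMINO_ACIDS) for _ in range(3)]
pssm = normalize_pssm(raw)
print(len(pssm), len(pssm[0]))  # shape is L x 20
print(sum(pssm[0]))             # each row sums to 1
```

The expensive step in practice is the PSI-BLAST search itself, not this normalization, which is why PSSM-based encoders are costly to apply proteome-wide.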
“…This approach [55] employs four connected convolutional layers, each followed by a pooling layer, within a Siamese-like architecture to capture latent patterns in the input protein sequences. The prediction module concatenates the pair of protein representations generated by the two identical sub-networks and passes them through three stacked fully connected layers with leaky ReLU activations.…”
Section: Deep Learning Methodology (mentioning; confidence: 99%)
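The key idea in the Siamese-like scheme above is weight sharing: the same sub-network embeds both proteins, and only the concatenated pair reaches the classifier head. A minimal structural sketch (illustrative names, with the convolution/pooling stack stubbed out as a toy shared transform; not the authors' code):

```python
# Structural sketch of a Siamese-like pair scorer. The shared encoder stands
# in for the four convolutional + pooling layers; weight sharing means the
# SAME weights process both proteins.
def shared_encoder(seq_features, weights):
    # Stub for the convolutional sub-network: one weighted transform
    # applied identically to either input.
    return [w * x for w, x in zip(weights, seq_features)]

def leaky_relu(x, slope=0.01):
    return x if x > 0 else slope * x

def predict_pair(feat_a, feat_b, enc_weights, head_weights):
    emb_a = shared_encoder(feat_a, enc_weights)  # identical sub-network...
    emb_b = shared_encoder(feat_b, enc_weights)  # ...applied to each protein
    concat = emb_a + emb_b                       # concatenation, not addition
    # Stand-in for the three fully connected layers with leaky ReLU.
    return sum(w * leaky_relu(x) for w, x in zip(head_weights, concat))

score = predict_pair([1.0, 2.0, 3.0], [4.0, 5.0, 6.0],
                     [0.5, 0.5, 0.5], [1.0] * 6)
print(score)  # 10.5
```

Sharing the encoder halves the parameter count and makes the learned sequence features protein-order-agnostic up to the head, which is why Siamese designs are popular for pairwise interaction prediction.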
“…Deep learning–based models have overcome such problems. For example, Yang et al embedded local features such as binding motifs into feature matrices and captured their patterns using a convolutional neural network (CNN) [19] . They applied two different transfer learning methods to improve the generalizability of the model.…”
Section: Introduction (mentioning; confidence: 99%)