2022
DOI: 10.1038/s41598-022-13951-2

Multi-task learning to leverage partially annotated data for PPI interface prediction

Abstract: Protein-protein interactions (PPI) are crucial for protein functioning; nevertheless, predicting the residues in PPI interfaces from the protein sequence remains a challenging problem. In addition, structure-based functional annotations, such as PPI interface annotations, are scarce: residue-based PPI interface annotations are available for only about one-third of all protein structures. If we want to use a deep learning strategy, we have to overcome the problem of limited data availability. Here we use a multi…
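The core idea of leveraging partially annotated data is to compute the interface loss only where annotations exist, so proteins without PPI interface labels can still contribute to the shared representation through the other tasks. A minimal PyTorch sketch of such a masked loss follows; the function name, tensor shapes, and mask convention are assumptions for illustration, not taken from the paper:

```python
# Minimal sketch (PyTorch): binary interface loss restricted to annotated residues.
# Shapes and the mask convention (1 = annotation available) are assumptions.
import torch
import torch.nn.functional as F

def masked_interface_loss(logits, labels, mask):
    # logits: (batch, seq_len) raw scores from the PPI interface head
    # labels: (batch, seq_len) 0/1 interface annotations (ignored where mask == 0)
    # mask:   (batch, seq_len) 1 where a residue-level annotation exists, else 0
    per_residue = F.binary_cross_entropy_with_logits(
        logits, labels.float(), reduction="none"
    )
    per_residue = per_residue * mask                    # drop unannotated positions
    return per_residue.sum() / mask.sum().clamp(min=1)  # average over annotated residues
```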

Cited by 9 publications (9 citation statements). References: 62 publications.
“…Firstly, there isn’t always a clear reason for a head-on comparison with other methods. You may, for example, be setting out to find the added value (or not) of specific parts of your training procedure (e.g., [15]) or of the architecture (e.g., [16, 17]). First point of business will be to identify the current state of the art, which you can usually find in a recent benchmarking review.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…We also introduced a broader benchmark set, ProteinGLUE, including multiple prediction tasks: secondary structure, solvent accessibility, PPI, epitopes, and hydrophobic patch prediction [16]. Many method papers will also include an update of the latest developments (e.g., [15, 44]).…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…Generally, an MTL model can be trained by linearly combining loss functions from different tasks into a single total loss function [15]. In this way, the model can learn a shared representation for all tasks by stochastic gradient descent (SGD) with back-propagation [15,43]. Ordinarily, assuming that there are M tasks in all, the global loss function can be defined as $L_{\text{total}} = \sum_{i=1}^{M} w_i L_i$, where $L_i$ represents the task-specific loss function and $w_i$ denotes the weight assigned to each $L_i$.…”
Section: Details of MTL Architecture (citation type: mentioning; confidence: 99%)
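For concreteness, a short sketch of that weighted linear combination; the task names and fixed weights w_i below are illustrative assumptions (in practice the weights may also be tuned or learned):

```python
# Sketch of the global multi-task loss L_total = sum_i w_i * L_i.
# Task names and weights are placeholders, not values from the paper.
import torch

def total_loss(task_losses, weights):
    # task_losses: list of M scalar tensors, one per task head
    # weights:     list of M floats (the w_i)
    return sum(w * l for w, l in zip(weights, task_losses))

# Typical training step with SGD and back-propagation:
# loss = total_loss([loss_interface, loss_ss, loss_rsa], weights=[1.0, 0.5, 0.5])
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```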
“…Many techniques for protein structure prediction have been intensively studied in recent years, and scientists have developed an increasing number of creative models to boost prediction performance. Database annotation and sequence-based approaches are the two main approaches used in this area [2]. In order to make predictions, sequence-based approaches attempt to extract unique features from protein sequences.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
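As a simple illustration of a sequence-based feature (an assumption for exposition, not a method from the cited works), one-hot encoding maps each residue to a 20-dimensional indicator vector:

```python
# Illustrative sequence-based feature: one-hot encoding of amino acids.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_encode(sequence):
    # Returns an (L, 20) matrix; unknown residues (e.g., 'X') stay all-zero.
    features = np.zeros((len(sequence), len(AMINO_ACIDS)), dtype=np.float32)
    for pos, aa in enumerate(sequence.upper()):
        idx = AA_INDEX.get(aa)
        if idx is not None:
            features[pos, idx] = 1.0
    return features

# Example: one_hot_encode("MKTAYIAKQR").shape -> (10, 20)
```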