2021
DOI: 10.21203/rs.3.rs-785618/v1
Preprint

TransMut: a program to predict HLA-I peptide binding and optimize mutated peptides for vaccine design by the Transformer-derived self-attention model

Abstract: Computational prediction of the interaction between human leukocyte antigen (HLA) and peptide (pHLA) can speed up epitope screening and vaccine design. Here, we develop the TransMut framework composed of TransPHLA for pHLA binding prediction and AOMP for mutated peptide optimization, which can be generalized to any binding and mutation task of biomolecules. Firstly, TransPHLA is developed by using a Transformer-derived self-attention model to predict pHLA binding, which is significantly superior to 11 previous…
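As an illustration of the kind of model the abstract describes, the sketch below shows a minimal Transformer-encoder binding classifier for peptide-HLA pairs in PyTorch. It is a hedged sketch only: the model name PHLABindingModel, the hyperparameters, the HLA pseudo-sequence encoding, and the mean-pooling head are assumptions for illustration, not the authors' TransPHLA implementation.

```python
# Minimal sketch (PyTorch) of a Transformer-encoder binding classifier in the
# spirit of TransPHLA: peptide and HLA pseudo-sequence residues are embedded,
# concatenated, passed through self-attention layers, and pooled into a
# binder / non-binder score. All names and hyperparameters are illustrative,
# not the authors' implementation.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_TO_IDX = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}  # 0 is padding

class PHLABindingModel(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2, max_len=60):
        super().__init__()
        self.embed = nn.Embedding(len(AMINO_ACIDS) + 1, d_model, padding_idx=0)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)  # residue + position embedding
        x = self.encoder(x, src_key_padding_mask=(tokens == 0))
        x = x.mean(dim=1)                             # mean-pool over residues
        return torch.sigmoid(self.head(x)).squeeze(-1)  # binding probability

def encode(peptide, hla_pseudo_seq, max_len=60):
    """Concatenate peptide and HLA pseudo-sequence, then pad to max_len."""
    ids = [AA_TO_IDX[aa] for aa in peptide + hla_pseudo_seq]
    return torch.tensor(ids + [0] * (max_len - len(ids))).unsqueeze(0)

model = PHLABindingModel()
score = model(encode("SIINFEKLL", "YYAMYGEKVAHTHVDTLYVRYHYYTWAVLAYTWY"))
print(float(score))  # probability-like binding score for this peptide-HLA pair
```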


Cited by 4 publications (6 citation statements)
References 75 publications
“…Transformer has achieved impressive success in machine translation tasks [46]. It also performs outstandingly to solve numerous high-level vision problems [63,10,47,50]…”
Section: Related Work
confidence: 99%
“…Transformer has shown state-of-the-art performance on high-level vision tasks [51,2,63,26,10,47,50]. The SA mechanism has great power to capture content-dependent global representations while modeling long-distance relationships.…”
Section: Introduction
confidence: 99%
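The self-attention (SA) operation referred to above can be written compactly. The following is a generic sketch of single-head scaled dot-product self-attention (function and variable names are illustrative): every token attends to every other token, which is what produces the content-dependent, global representations and long-distance modeling the statement mentions.

```python
# Generic scaled dot-product self-attention (illustrative, single head):
# every position queries every other position, so the output at each token
# is a content-dependent mixture of the whole sequence.
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (batch, seq_len, d_model); w_*: (d_model, d_head) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5   # (batch, seq, seq)
    weights = F.softmax(scores, dim=-1)                     # attention over all tokens
    return weights @ v                                      # global, content-dependent mix

x = torch.randn(2, 10, 32)                                  # toy batch of 10-token sequences
w = [torch.randn(32, 16) for _ in range(3)]
out = self_attention(x, *w)
print(out.shape)  # torch.Size([2, 10, 16])
```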
“…Addressing the quadratic computational complexity of the original ViT's self-attention mechanism, Liu et al designed an approach that employs window shifting for local self-attention within windows, promoting cross-window interactions and obtaining gratifying results. Recent works [13,14,15,16,17] have shifted their focus away from image classification to more advanced computer vision domains.…”
Section: Transformer
confidence: 99%
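The window-shifting scheme described in that statement restricts self-attention to fixed-size windows and cyclically shifts the feature map between layers so that neighbouring windows exchange information. The sketch below illustrates only the partitioning and shifting bookkeeping, with assumed helper names (window_partition, shift); it is not the cited implementation.

```python
# Sketch of windowed self-attention bookkeeping: partition the feature map into
# non-overlapping windows (attention is then run per window, quadratic only in
# the window size), and cyclically shift the map between layers so neighbouring
# windows exchange information. Illustrative only, not the cited implementation.
import torch

def window_partition(x, win):
    """x: (B, H, W, C) -> (num_windows*B, win*win, C) for per-window attention."""
    B, H, W, C = x.shape
    x = x.view(B, H // win, win, W // win, win, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, win * win, C)

def shift(x, win):
    """Cyclically shift by half a window so the next layer's windows straddle
    the previous layer's window boundaries (cross-window interaction)."""
    return torch.roll(x, shifts=(-win // 2, -win // 2), dims=(1, 2))

feat = torch.randn(1, 8, 8, 32)            # toy 8x8 feature map, 32 channels
windows = window_partition(feat, win=4)    # 4 windows of 16 tokens each
shifted = window_partition(shift(feat, 4), win=4)
print(windows.shape, shifted.shape)        # torch.Size([4, 16, 32]) twice
```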
“…ViT also uses one extra class token to aggregate information from the entire sequence of the patch tokens. Although the class token has been removed in a number of recent transformer methods [7,8,29], this work will underline its importance for weakly supervised semantic segmentation.…”
Section: Introduction
confidence: 99%
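The class token referred to above is a learned embedding prepended to the patch-token sequence; after the encoder, its output is read off as the aggregate, sequence-level representation. A minimal illustrative sketch follows, with assumed names and hyperparameters rather than any specific paper's code.

```python
# Minimal sketch of the ViT-style class token: a learned vector prepended to
# the patch tokens; after self-attention layers, its output aggregates
# information from the whole sequence and feeds the classifier head.
import torch
import torch.nn as nn

class ClassTokenPooling(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2, n_classes=10):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))  # learned [CLS]
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, patch_tokens):                      # (B, N, d_model)
        cls = self.cls_token.expand(patch_tokens.size(0), -1, -1)
        x = torch.cat([cls, patch_tokens], dim=1)         # prepend class token
        x = self.encoder(x)
        return self.head(x[:, 0])                         # read off the class token

logits = ClassTokenPooling()(torch.randn(2, 196, 64))     # e.g. 14x14 patch tokens
print(logits.shape)                                        # torch.Size([2, 10])
```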