2019
DOI: 10.3389/fimmu.2019.02559
DeepHLApan: A Deep Learning Approach for Neoantigen Prediction Considering Both HLA-Peptide Binding and Immunogenicity

Abstract: Neoantigens play important roles in cancer immunotherapy. Current methods used for neoantigen prediction focus on the binding between human leukocyte antigens (HLAs) and peptides, which is insufficient for high-confidence neoantigen prediction. In this study, we apply deep learning techniques to predict neoantigens considering both the possibility of HLA-peptide binding (binding model) and the potential immunogenicity (immunogenicity model) of the peptide-HLA complex (pHLA). The binding model achieves comparab…

Cited by 107 publications
(87 citation statements)
References 54 publications
“…We screened multiple layer types in the sequence data embedding block: recurrent layers (bidirectional GRU and LSTM), self-attention, convolutional layers (simple convolutions and inception-like), and densely connected layers as a reference. Recurrent layer types and self-attention layers were previously useful for modeling language (Vaswani et al) and epitope (Wu et al) data. Convolutional layer types have been useful for modeling epitope (Han & Kim; Vang & Xie) and image (Szegedy et al) data.…”
Section: Methods
confidence: 99%
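Of the layer types screened in the statement above, self-attention is the least standard outside language modeling; the following is a minimal numpy sketch of the scaled dot-product self-attention it refers to (Vaswani et al), applied to a peptide-length sequence. The embedding size, peptide length, and random weights are illustrative assumptions, not the cited model's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (L, d) matrix of per-residue embeddings for a length-L peptide.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # (L, L) pairwise residue-to-residue scores, scaled by sqrt(d_k).
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Each output row is an attention-weighted mix of all residue values.
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
L, d = 9, 8                      # e.g. a 9-mer peptide, 8-dim embeddings
X = rng.normal(size=(L, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (9, 8): one context vector per residue
```

The appeal for epitope data is the (L, L) score matrix: every residue position can directly weight every other position, without the distance decay of recurrent layers or the fixed receptive field of convolutions.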
“…In addition, we also compared the precision indicator, calculated as the ratio of true positives to predicted positive peptides. In the test set, the precision of NetMHCpan 4.0 was 54.55%, while the precision of POTN was 67.44%, a 23.63% improvement [the method for improvement rate calculation is given in (68)].…”
Section: Results
confidence: 99%
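The arithmetic in the statement above can be checked directly: precision is true positives over predicted positives, and the quoted 23.63% is the relative improvement of 67.44% over 54.55%. The helper names below are ours, not from the cited paper.

```python
def precision(tp, fp):
    """Precision = true positives / all predicted positives."""
    return tp / (tp + fp)

def relative_improvement(new, old):
    """Relative improvement of `new` over `old`, in percent."""
    return (new - old) / old * 100

# Figures quoted above: NetMHCpan 4.0 precision 54.55%, POTN precision 67.44%.
imp = relative_improvement(67.44, 54.55)
print(round(imp, 2))  # 23.63
```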
“…[141] During the past few years, a number of deep learning-based methods have been developed that outperform traditional machine learning methods, including shallow neural networks, for peptide-MHC binding prediction (Table 4). Among these algorithms, 14 (ConvMHC, [142] HLA-CNN, [143] DeepMHC, [144] DeepSeqPan, [145] MHCSeqNet, [146] MHCflurry, [147] DeepHLApan, [148] ACME, [149] EDGE, [137] CNN-NF, [150] DeepNeo, [151] DeepLigand, [152] MHCherryPan, [153] and DeepAttentionPan [141]) are specific for MHC class I binding prediction, three (DeepSeqPanII, [154] MARIA, [138] and NeonMHC2 [139]) are specific for MHC class II binding prediction, and four (AI-MHC, [155] MHCnuggets, [156] PUFFIN, [157] and USMPep [158]) can make predictions for both classes. All four types of peptide encoding approaches illustrated in Figure 1 are used in these tools, with one-hot encoding and BLOSUM matrix encoding being the most frequently used methods (Table 4).…”
Section: Deep Learning for MHC-Binding Peptide Prediction
confidence: 99%
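Of the two encodings the statement above names as most common, one-hot encoding is the simpler: each residue of the peptide becomes an indicator row over the 20 standard amino acids. A minimal numpy sketch (our own illustration, not any cited tool's code); BLOSUM matrix encoding would instead fill each row with the residue's row from a substitution matrix such as BLOSUM62.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def one_hot_encode(peptide):
    """Encode a peptide string as an (L, 20) one-hot matrix."""
    idx = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
    mat = np.zeros((len(peptide), len(AMINO_ACIDS)))
    for row, aa in enumerate(peptide):
        mat[row, idx[aa]] = 1.0  # one indicator per residue position
    return mat

enc = one_hot_encode("SIINFEKL")  # the classic ovalbumin 8-mer epitope
print(enc.shape)  # (8, 20)
```

The resulting (L, 20) matrix is what the convolutional and recurrent architectures listed in the statement typically consume as their input layer.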