2018
DOI: 10.1002/pep2.24079
Classifying antimicrobial and multifunctional peptides with Bayesian network models

Abstract: Bayesian network models are finding success in characterizing enzyme‐catalyzed reactions, slow conformational changes, predicting enzyme inhibition, and genomics. In this work, we apply them to statistical modeling of peptides by simultaneously identifying amino acid sequence motifs and using a motif‐based model to clarify the role motifs may play in antimicrobial activity. We construct models of increasing sophistication, demonstrating how chemical knowledge of a peptide system may be embedded without requiri…
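The abstract describes a motif-based statistical model of peptides. As an illustration of the general motif idea only (not the authors' Bayesian network, which the truncated abstract does not fully specify), here is a minimal sketch in which a motif is a position-specific amino-acid probability matrix and a peptide is scored by its best-matching window. The probabilities and the example sequence are made up.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # standard 20-letter alphabet
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def motif_log_likelihood_ratio(sequence, motif_probs, background=1.0 / 20):
    """Score a peptide against one motif.

    motif_probs has shape (W, 20); motif_probs[j, a] is the probability of
    amino acid a at motif position j. Returns the best log-likelihood ratio
    (motif vs. uniform background) over all windows of width W.
    """
    width = motif_probs.shape[0]
    idx = np.array([AA_INDEX[aa] for aa in sequence])
    best = -np.inf
    for start in range(len(sequence) - width + 1):
        window = idx[start:start + width]
        llr = np.sum(np.log(motif_probs[np.arange(width), window] / background))
        best = max(best, llr)
    return best

# Toy usage: a made-up width-3 motif and a cationic/hydrophobic test peptide.
toy_motif = np.random.default_rng(0).dirichlet(np.ones(20), size=3)
print(motif_log_likelihood_ratio("KWKLFKKIGAVLKVL", toy_motif))
```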


Cited by 19 publications (23 citation statements) · References 67 publications (132 reference statements)
“…The task model is a convolutional neural network whose structure was partially based on that of a model from our past work in peptide modeling. 81 The model structure is shown schematically in Figure 1 . The first layer of the neural network is a convolutional layer with a weight matrix of dimension [ W × A × K ], where W and K conceptually represent peptide “motif widths” and number of “motif classes,” respectively, and A is the length of the amino acid alphabet considered (20 for the naturally occurring peptides used here).…”
Section: Methods · mentioning · confidence: 99%
“…In addition, Table 5 summarizes the presented kernels. However, besides the popular algorithms mentioned above, further methods leveraged partial least squares [82, 83, 107], hidden Markov models [108], logistic regression [109] and Bayesian networks [110]. Furthermore, ensembles of several classifiers have been also successfully implemented, such as in [17] or [21], whereby often one classifier is trained with a particular sequence or structural encoding.…”
Section: Models · mentioning · confidence: 99%
“…Machine learning has been used to aid in the discovery and development of AMPs, with increasing adoption as both fields have matured. A large body of work has been developed based on predictive modeling of AMP properties, [18][19][20][21][22][23][24][25][26][27] and is generally labelled as quantitative structure-activity relationship (QSAR) modeling. The basic QSAR approach is to select a property of interest (e.g.…”
Section: Related Work · mentioning · confidence: 99%
“…Many recent approaches to AMP discovery have relied on predictive models, [18][19][20][21][22][23][24][25][26][27] using a naive candidate generation method and a rejection sampling approach to identify promising candidates. Since the candidate generation method is naive, often many samples must be drawn before a promising peptide is found.…”
Section: Introduction · mentioning · confidence: 99%
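The naive generate-then-reject loop described in that statement can be sketched as follows; the uniform generator and the toy scoring function are stand-ins for whatever generator and trained predictive model a given study actually used.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def random_peptide(length, rng):
    """Naive candidate generator: uniform random sequences."""
    return "".join(rng.choice(AMINO_ACIDS) for _ in range(length))

def toy_amp_score(sequence):
    """Placeholder predictive model: fraction of cationic plus hydrophobic
    residues (a stand-in for a trained AMP activity classifier)."""
    favorable = set("KRAILMFWVY")
    return sum(aa in favorable for aa in sequence) / len(sequence)

def rejection_sample(n_candidates, threshold=0.7, length=15, seed=0):
    """Draw naive candidates; keep those the model scores above threshold."""
    rng = random.Random(seed)
    kept, drawn = [], 0
    while len(kept) < n_candidates:
        drawn += 1
        candidate = random_peptide(length, rng)
        if toy_amp_score(candidate) >= threshold:
            kept.append(candidate)
    return kept, drawn

hits, total_drawn = rejection_sample(3)
print(hits, f"({total_drawn} naive samples drawn)")
```

Because the generator is naive, most draws fail the filter, which is the inefficiency the cited statement points out.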