2021
DOI: 10.1021/acs.jcim.1c01118

A Comparative Study of Marginalized Graph Kernel and Message-Passing Neural Network

Abstract: This work proposes a state-of-the-art hybrid kernel to calculate molecular similarity. Combined with Gaussian process models, the performance of the hybrid kernel in predicting molecular properties is comparable to that of the Directed Message Passing Neural Network (D-MPNN). The hybrid kernel consists of a marginalized graph kernel (MGK) and a radial basis function (RBF) kernel, which operate on molecular graphs and global molecular features, respectively. Bayesian optimization was used to obtain the optimal …
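As a hedged illustration of the hybrid construction described in the abstract (function names and toy numbers are mine, not from the paper): a minimal sketch in which a precomputed marginalized-graph-kernel value for a pair of molecules is multiplied by an RBF kernel evaluated on their global feature vectors.

```python
import math

def rbf_kernel(x, y, length_scale=1.0):
    """RBF kernel on two global molecular feature vectors."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * length_scale ** 2))

def hybrid_kernel(k_graph, x, y, length_scale=1.0):
    """Hybrid kernel: product of a marginalized-graph-kernel value
    (assumed precomputed on the two molecular graphs) and an RBF
    kernel on the molecules' global feature vectors."""
    return k_graph * rbf_kernel(x, y, length_scale)

# Two molecules whose graphs have an assumed MGK similarity of 0.8
# and whose global features differ slightly.
k = hybrid_kernel(0.8, [0.0, 1.0], [0.1, 0.9])
```

In a Gaussian process, such a product kernel lets graph structure and global descriptors jointly shrink the similarity between molecules that differ in either respect.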

Cited by 7 publications (17 citation statements)
References 28 publications
“…Similarly, κ e (·,·) is the bond kernel that computes the similarity between a pair of bonds, and it is defined analogously to the atom kernel. For details on the features of atoms, bonds, and the associated kernel functions, we refer the reader to ref .…”
Section: Methods
confidence: 99%
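A common form for such atom and bond base kernels is a product of Kronecker-delta kernels over discrete features; the sketch below is a hypothetical illustration of that pattern (the feature set and the mismatch weight `h` are my assumptions, not the paper's definitions).

```python
def delta(x, y, h):
    """Kronecker-delta base kernel: 1 on a match, h on a mismatch."""
    return 1.0 if x == y else h

def atom_kernel(a1, a2, h=0.3):
    """Hypothetical atom kernel: a product of per-feature delta
    kernels over discrete atom features, here (element, charge).
    A bond kernel would be defined analogously over bond features
    such as (bond order, ring membership)."""
    k = 1.0
    for x, y in zip(a1, a2):
        k *= delta(x, y, h)
    return k

# Same element, different formal charge: one mismatched feature.
k = atom_kernel(("C", 0), ("C", 1))
```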
“…Of note, in previous work, Xiang et al. set the starting probability using a graph-size-dependent normalization to enhance model performance in cross-validation:

$$\tilde{K}(G, G') = F\,\frac{K(G, G')}{\sqrt{K(G, G)\,K(G', G')}}\,\exp\!\left[-\frac{\left(K(G, G) - K(G', G')\right)^2}{\lambda^2}\right]$$

where F and λ are hyperparameters to be optimized. However, this equation violates the additivity of the graph kernel over graph nodes, which, as demonstrated later, is a prerequisite for the atomic attribution calculation.…”
Section: Methods
confidence: 99%
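The size-dependent normalization quoted above can be sketched as follows; this is my reading of the (extraction-damaged) equation, with F and λ as the hyperparameters it names, so treat the exact form as an assumption rather than the paper's verbatim definition.

```python
import math

def normalized_graph_kernel(k_xy, k_xx, k_yy, F=1.0, lam=1.0):
    """Graph-size-dependent normalization of a raw graph kernel:
    cosine-style normalization by the self-kernels, scaled by F and
    damped by an exponential penalty on the difference between the
    two self-kernel values. k_xy = K(G, G'), k_xx = K(G, G),
    k_yy = K(G', G')."""
    norm = k_xy / math.sqrt(k_xx * k_yy)
    penalty = math.exp(-((k_xx - k_yy) ** 2) / lam ** 2)
    return F * norm * penalty

# Identical graphs: k_xy == k_xx == k_yy, so the result is exactly F.
k_same = normalized_graph_kernel(2.0, 2.0, 2.0)
```

Because the result depends nonlinearly on whole-graph self-kernels, it cannot be decomposed into per-node contributions, which is the additivity objection raised in the quote.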
“…
| Category | Principle | Representative methods | Applications |
|---|---|---|---|
| … | … | Box Bounding (Netzeva et al., 2005); Convex Hull (Jaworska et al., 2005); DM (Sheridan et al., 2004); SDC score (Liu et al., 2018); NNAS (Allen et al., 2020) | Virtual screening (Berenger and Yamanishi, 2019); anticancer peptide activity prediction (Chen et al., 2021); SARS-CoV-2 inhibitor prediction (Gawriljuk et al., 2021); toxicity prediction (Jiang et al., 2021) |
| Bayesian | Parameters and outputs are treated as random variables and maximum a posteriori (MAP) estimation is adopted according to Bayes’ theorem. | VI (MC-dropout) (Gal and Ghahramani, 2016); BNN (Goan and Fookes, 2020); GP-MGK (Xiang et al., 2021); MVE (Nix and Weigend, 1994); Bayesian GCN (Ryu et al., 2019) | Molecular property prediction (Zhang and Lee, 2019); virtual screening (Ryu et al., 2019); protein–ligand interaction prediction (Kim et al., 2021) |
| Ensemble-based | The consistency of the predictions from various base models is an estimate of confidence. | Bootstrapping (Scalia et al., 2020); RF (Sheridan, 2012); DeltaDelta (Jimenez-Luna et al., 2019); Deep ensemble (Lakshminarayanan et al., 2017); MC-dropout (Gal and Ghahramani, 2016) | Drug-likeness prediction (Beker et al., 2020); molecular property prediction (Scalia et al., 2020); … |
…”
Section: Methods of Uncertainty Quantification
confidence: 99%
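The ensemble-based principle in the quote — prediction consistency as a confidence estimate — reduces to the mean and spread of base-model outputs. A minimal sketch (the base-model predictions are hypothetical numbers):

```python
def ensemble_prediction(predictions):
    """Ensemble-based uncertainty: the mean of the base-model
    predictions is the property estimate, and their variance
    (spread) serves as the confidence estimate."""
    n = len(predictions)
    mean = sum(predictions) / n
    var = sum((p - mean) ** 2 for p in predictions) / n
    return mean, var

# Five hypothetical base-model predictions for one molecule.
mean, var = ensemble_prediction([1.2, 1.0, 1.1, 1.3, 0.9])
```

Bootstrapping, deep ensembles, and MC-dropout listed in the table differ mainly in how the base models are generated, not in this aggregation step.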
“…Xiang et al. proposed a GP model with a hybrid kernel, GP-MGK, for molecular property prediction (Xiang et al., 2021). They found that GP-MGK outperformed D-MPNN, a graph convolutional neural network, in uncertainty quantification.…”
Section: Methods of Uncertainty Quantification
confidence: 99%
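The reason a GP with a molecular kernel yields uncertainty "for free" is visible already in the single-training-point case of the standard GP posterior; the sketch below shows that special case (the kernel values are placeholders, e.g. from a hybrid graph kernel — this is textbook GP algebra, not the paper's implementation).

```python
def gp_predict_1pt(k_train, k_star, k_self, y_train, noise=0.0):
    """Gaussian-process posterior for a single training molecule:
    predictive mean and variance at a test molecule, given the
    training self-kernel k_train = K(G1, G1), the cross-kernel
    k_star = K(G*, G1), the test self-kernel k_self = K(G*, G*),
    and the training target y_train. Variance shrinks as the test
    molecule becomes more similar to the training molecule."""
    denom = k_train + noise
    mean = k_star * y_train / denom
    var = k_self - k_star ** 2 / denom
    return mean, var

# A test molecule fairly similar (k_star = 0.9) to one training
# molecule with target value 2.0 and unit self-kernels.
mean, var = gp_predict_1pt(1.0, 0.9, 1.0, 2.0)
```

With many training molecules the scalars become the kernel matrix and cross-kernel vector, but the structure — and hence the built-in uncertainty estimate compared against D-MPNN in the quote — is the same.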