2020
DOI: 10.48550/arxiv.2005.13607
Preprint

Multi-View Graph Neural Networks for Molecular Property Prediction

Hehuan Ma,
Yatao Bian,
Yu Rong
et al.

Abstract: The crux of molecular property prediction is to generate meaningful representations of the molecules. One promising route is to exploit the molecular graph structure through Graph Neural Networks (GNNs). It is well known that both atoms and bonds significantly affect the chemical properties of a molecule, so an expressive model should be able to exploit both node (atom) and edge (bond) information simultaneously. Guided by this observation, we present Multi-View Graph Neural Network (MV-GNN), a multi-view messa…
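As an illustration of the idea in the abstract — message passing that uses both atom (node) and bond (edge) features — here is a minimal NumPy sketch. This is a generic formulation under assumed shapes and names, not the actual MV-GNN architecture; all identifiers are hypothetical.

```python
import numpy as np

def message_passing_step(node_feats, edge_index, edge_feats, W_node, W_edge):
    """One illustrative message-passing step that combines atom (node)
    and bond (edge) features. Generic sketch, not the MV-GNN model.

    node_feats: (num_nodes, d_node) atom features
    edge_index: list of (src, dst) directed bonds
    edge_feats: (num_edges, d_edge) bond features, aligned with edge_index
    W_node:     (d_node, d_hidden) learned node projection (assumed given)
    W_edge:     (d_edge, d_hidden) learned edge projection (assumed given)
    """
    messages = np.zeros_like(node_feats @ W_node)
    for (src, dst), e in zip(edge_index, edge_feats):
        # Each message mixes the neighbor's atom features with the bond features,
        # so both node and edge information reach the update.
        messages[dst] += node_feats[src] @ W_node + e @ W_edge
    # Update node states with the aggregated messages (ReLU nonlinearity).
    return np.maximum(node_feats @ W_node + messages, 0.0)
```

Stacking several such steps lets each atom's representation absorb information from progressively larger neighborhoods, which is the mechanism the abstract refers to.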

Cited by 18 publications (18 citation statements)
References 31 publications
“…On the other hand, for GNN architectures as a comparison, extensive pre-training can be necessary; e.g., MolCLR requires around 5 days of pre-training using an Nvidia® Quadro RTX™ 6000, as reported in the corresponding literature [37]. GNN models are also harder to implement, considering the effort of establishing multiple layers with a considerable number of nodes, and especially the necessity of back-propagation during training over millions of parameters in total [23].…”
Section: MoleHD vs. Baseline Models (mentioning)
confidence: 99%
“…Thus, MoleHD only needs to run on a commodity CPU and can finish both training and testing on the reported datasets within minutes, while a GNN requires around 5 days just for training on an Nvidia GPU [37]. (3) Smaller memory footprint: MoleHD only needs to store a set of vectors for comparison during inference, usually less than 10 MB, while SOTA neural networks often need millions of nodes and require memory on the 100 MB scale just to store the parameters (e.g., weights and activation values) [23].…”
Section: Introduction (mentioning)
confidence: 99%
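The inference scheme described in the excerpt above — storing a small set of vectors and comparing against them at test time — can be sketched as a nearest-class lookup by cosine similarity. This is an illustrative hyperdimensional-computing-style classifier, not the MoleHD implementation; the names and data are hypothetical.

```python
import numpy as np

def classify(query_hv, class_hvs):
    """Return the class whose stored hypervector is most similar to the
    query, by cosine similarity. Inference is just a handful of dot
    products over the stored vectors, which is why the memory footprint
    stays small and no GPU is needed."""
    sims = {
        label: np.dot(query_hv, hv) / (np.linalg.norm(query_hv) * np.linalg.norm(hv))
        for label, hv in class_hvs.items()
    }
    return max(sims, key=sims.get)

# Hypothetical usage: two stored class hypervectors, one query.
class_hvs = {"active": np.array([1.0, 0.0, 0.0]),
             "inactive": np.array([0.0, 1.0, 0.0])}
label = classify(np.array([0.9, 0.1, 0.0]), class_hvs)
```

In a real hyperdimensional setup the vectors would be high-dimensional (thousands of components) and built by encoding molecular tokens, but the comparison step is exactly this cheap.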
“…Because of the vast application of small graphs, numerous algorithms have been proposed to obtain their information by extracting their features or learning how similar two graphs are [3,11,26,30,33,36,43], especially using GCNs [3,26,33,36,43]. In particular, SimGNN [3] proposed a GCN-based approach to learn a similarity score for such graphs.…”
Section: Introduction (mentioning)
confidence: 99%
“…Graph embedding models [1]-[5], which bring the expressive power of deep learning to graph-structured data, have achieved remarkable success in various domains, such as drug discovery [6]-[9], social network analysis [10]-[12], computer vision [13]-[16], medical imaging [16]-[18], financial surveillance [19], structural role classification [20]-[22], and automated machine learning [23]. Given the increasing popularity and success of these methods, a number of recent works have examined the risk that graph embedding models face from adversarial attacks, much as researchers have worried about for convolutional neural networks [24].…”
Section: Introduction (mentioning)
confidence: 99%