2021
DOI: 10.1109/tnnls.2020.2978386
A Comprehensive Survey on Graph Neural Networks

Abstract: Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed signific…


Cited by 6,787 publications (3,451 citation statements)
References 114 publications
“…From a machine learning perspective, KPNNs complement graph neural networks (GNNs) in interesting ways. GNNs build on domain network knowledge to define relationships between input nodes, which are iteratively updated based on their neighbors prior to predicting a given output [88,89]. Typical applications of GNNs include predictions of node labels (given labels of some nodes) or graph labels (given a set of graphs).…”
Section: Discussion
confidence: 99%
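The iterative neighbor update the excerpt describes can be sketched as a single message-passing step. This is a minimal illustration, not any cited paper's method: the toy graph, feature values, and the choice of mean aggregation are all assumptions.

```python
import numpy as np

# Hypothetical toy graph: 4 nodes, adjacency list of neighbors.
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}

# One feature vector per node (arbitrary example values).
h = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.5, 0.5]])

def message_passing_step(h, neighbors):
    """Update each node by averaging its own state with its neighbors' states."""
    h_new = np.zeros_like(h)
    for v, nbrs in neighbors.items():
        stacked = np.vstack([h[v]] + [h[u] for u in nbrs])
        h_new[v] = stacked.mean(axis=0)
    return h_new

h1 = message_passing_step(h, neighbors)
```

Stacking several such steps (with learned transformations between them) lets information propagate beyond immediate neighbors, which is what enables the node-label and graph-label predictions mentioned above.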
“…Specifically, we consider the spatial organization of mRNAs inside tissues as a spatial functional network where different mRNA types interact based on their spatial proximity [Figure 1], and where subcellular domains can be identified as clusters of local gene constellations that are shared or cell-type specific. In order to investigate the spatial mRNA network for recurrent gene constellations, we adopted a powerful graph representation learning technique [27] based on graph neural networks (GNNs) [28], which have recently emerged as a state-of-the-art machine learning technique for leveraging information from graph local neighborhoods. Each mRNA location is therefore encoded in the graph as a node with a single feature representing its gene, and it is connected to all the other nodes representing the mRNAs located in its neighborhood [Figure 1a].…”
Section: Introduction
confidence: 99%
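The graph construction the excerpt describes (nodes = mRNA spots with a gene feature, edges = spatial proximity) can be sketched as a simple radius graph. The coordinates, gene labels, and the `radius` threshold below are hypothetical placeholders, not values from the cited study.

```python
import numpy as np

# Hypothetical mRNA spot data: 2-D positions and a gene label per spot.
positions = np.array([[0.0, 0.0],
                      [0.1, 0.0],
                      [1.0, 1.0],
                      [1.05, 1.0]])
genes = ["GeneA", "GeneB", "GeneA", "GeneC"]  # node feature per spot
radius = 0.2  # assumed neighborhood radius

# Edge list: connect every pair of spots closer than `radius`.
edges = [(i, j)
         for i in range(len(positions))
         for j in range(i + 1, len(positions))
         if np.linalg.norm(positions[i] - positions[j]) < radius]
```

In practice one would use a spatial index (e.g. a k-d tree) instead of the quadratic pair loop, but the resulting node/edge structure is the same input a GNN would consume.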
“…GCNs were introduced into the machine learning literature a few years ago [2]. Their main advantage is that they can exploit the power of convolutional NNs even in cases where spatial relationships are not complete [29,31]. Specifically, rather than encoding the data as a 2D matrix (or a 1D vector), GCNs use the graph structure to encode relationships between samples.…”
Section: Introduction
confidence: 99%
“…The core structure of a GCN is its graph convolutional layer, which enables it to combine graph structure (cell location) and node information (gene expression in a specific cell) as inputs to a neural network. Since the graph structure encodes spatially related cells, GCNs can utilize the convolutional layers that underlie much of the recent success of neural networks, without directly using image data [29,31].…”
Section: Introduction
confidence: 99%
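The graph convolutional layer described in these excerpts combines the adjacency structure with node features in one update. Below is a minimal sketch of one such layer (self-loops plus row-normalized neighbor averaging, then a linear map and ReLU); the 3-node graph, feature values, and random weights are illustrative assumptions, not the cited papers' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-node graph (adjacency) and 2-dim node features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])
W = rng.standard_normal((2, 4))  # would be a learned weight matrix in a real model

def gcn_layer(A, H, W):
    """One graph convolution: normalized neighbor aggregation + linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops so each node keeps its own signal
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalize by node degree
    return np.maximum(0.0, D_inv @ A_hat @ H @ W)

out = gcn_layer(A, H, W)
```

This is why a GCN can reuse the convolution idea without image data: the adjacency matrix plays the role of the fixed pixel grid, defining which samples are averaged together at each layer.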