2021
DOI: 10.1007/s41019-021-00155-3
Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art

Abstract: Graphs have been widely used to represent complex data in many applications, such as e-commerce, social networks, and bioinformatics. Efficient and effective analysis of graph data is important for graph-based applications. However, most graph analysis tasks are combinatorial optimization (CO) problems, which are NP-hard. Recent studies have focused a lot on the potential of using machine learning (ML) to solve graph-based CO problems. Most recent methods follow the two-stage framework. The first stage is grap…
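The abstract is truncated at this point; presumably the first stage is graph representation learning and the second stage decodes a solution from the learned representations. As a rough, hypothetical sketch of such a two-stage pipeline (the function names, scoring rule, and the vertex-cover example are illustrative assumptions, not taken from the survey):

```python
# Hypothetical two-stage pipeline for a graph CO task (minimum vertex cover):
# stage 1 computes node representations, stage 2 decodes a feasible solution.
# All names and the scoring rule are illustrative, not from the survey.
import networkx as nx
import numpy as np


def embed_nodes(G: nx.Graph, d: int = 4, hops: int = 2) -> dict:
    """Stage 1: untrained message-passing features (a stand-in for a learned GNN)."""
    x = {v: np.random.default_rng(v).normal(size=d) for v in G}
    for _ in range(hops):
        x = {v: x[v] + sum(x[u] for u in G[v]) / max(G.degree(v), 1) for v in G}
    return x


def decode_vertex_cover(G: nx.Graph, x: dict) -> set:
    """Stage 2: greedily add high-scoring nodes until every edge is covered."""
    score = {v: float(np.linalg.norm(x[v])) for v in G}  # toy scoring rule
    cover, uncovered = set(), set(G.edges())
    for v in sorted(score, key=score.get, reverse=True):
        if not uncovered:
            break
        if any(v in e for e in uncovered):
            cover.add(v)
            uncovered = {e for e in uncovered if v not in e}
    return cover


G = nx.erdos_renyi_graph(20, 0.2, seed=0)
cover = decode_vertex_cover(G, embed_nodes(G))
assert all(u in cover or v in cover for u, v in G.edges())
print(f"found a vertex cover of size {len(cover)} on {G.number_of_nodes()} nodes")
```

The same skeleton applies to other graph CO tasks by swapping the decoding stage for a problem-specific one.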

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

0
17
0
1

Year Published

2021
2021
2023
2023

Publication Types

Select...
5
2
1

Relationship

0
8

Authors

Journals

Cited by 59 publications (18 citation statements)
References 52 publications
“…Note that autoregressive models can also be combined with quantum neural networks, as done with classical neural networks [32]. Also note that autoregressive models can be formulated as combinatorial optimization problems in the framework of graph learning [33]. In addition, it has been shown that a quadratic unconstrained optimization formulation can be used for forecasting in finance [34].…”
Section: B Quantum Optimization for Autoregressive Models (mentioning)
confidence: 99%
“…Table 10 compares the total cost of finding the optimal contraction path and the actual time of executing the contraction for different algorithms and networks. Recall that for QC applications, each network has to be contracted multiple times to get a high-fidelity estimate of the circuit output [Peng et al, 2021]. Here, we assume that we contract each network 10^6 times [Pan et al, 2021b].…”
Section: A Additional Experimental Details, A.1 Hyperparameters (mentioning)
confidence: 99%
“…Following the recent success of ML for combinatorial optimization [Dai et al, 2017, Li et al, 2018, Sato et al, 2019, Almasan et al, 2019, Nair et al, 2020, Cappart et al, 2021, Peng et al, 2021], we devise a novel and effective Reinforcement Learning (RL) approach to solve TNCO. We formulate the problem as a Markov Decision Process (MDP): the state space is defined as a space of graphs, the action space is the set of edges to be contracted, and the transition function maps a graph to a contracted graph based on the selected edge.…”
Section: Introduction (mentioning)
confidence: 99%
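A minimal sketch of the MDP described in this statement, assuming networkx for graph handling; the environment class, its method names, and the degree-product cost are illustrative stand-ins, not the cited paper's formulation:

```python
# Toy MDP: states are graphs, actions are edges, the transition contracts
# the chosen edge; reward is the negative contraction cost (illustrative).
import random

import networkx as nx


class TNCOEnv:
    """Toy environment: contract the edges of a graph one at a time."""

    def __init__(self, graph: nx.Graph):
        self.graph = graph.copy()

    def actions(self):
        # Action space: the set of remaining edges.
        return list(self.graph.edges())

    def step(self, edge):
        u, v = edge
        # Stand-in for the true tensor contraction cost of this edge.
        cost = self.graph.degree(u) * self.graph.degree(v)
        # Transition: merge v into u, i.e. contract the selected edge.
        self.graph = nx.contracted_nodes(self.graph, u, v, self_loops=False)
        done = self.graph.number_of_edges() == 0
        # Negative cost as reward, so an RL agent minimizes total cost.
        return self.graph, -cost, done


# Usage with a random policy on a small ring-shaped network.
env = TNCOEnv(nx.cycle_graph(6))
done, total_reward = False, 0
while not done:
    _, reward, done = env.step(random.choice(env.actions()))
    total_reward += reward
print("total reward (negative contraction cost):", total_reward)
```

An RL agent would replace the random policy with one that scores edges from the current graph state.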
“…The procedure aims to map nodes so that similarity in the embedding space approximates similarity in the network. 81,82 Training can be unsupervised,…”
Section: Learning Features From Network (mentioning)
confidence: 99%
“…It means mapping nodes to a d-dimensional embedding space (a low-dimensional space rather than the actual dimension of the graph) so that similar nodes in the graph are embedded close to each other. The procedure aims to map nodes so that similarity in the embedding space approximates similarity in the network. 81,82 Training can be unsupervised, semi-supervised or supervised.…”
Section: Introduction (mentioning)
confidence: 99%
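A minimal sketch of this idea, assuming the adjacency matrix as the similarity signal: a truncated SVD yields d-dimensional embeddings whose dot products approximate node-to-node similarity in the network. Methods such as DeepWalk or node2vec instead use random-walk co-occurrence statistics, so this is not the cited papers' exact procedure:

```python
# Truncated-SVD node embeddings: dot products between the d-dimensional
# embeddings approximate the original node-to-node similarities (here,
# adjacency is used as the similarity matrix for illustration).
import networkx as nx
import numpy as np

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)           # node-to-node similarity (here: adjacency)

d = 8                              # embedding dimension << number of nodes
U, S, Vt = np.linalg.svd(A)
src = U[:, :d] * S[:d]             # "source" embedding of each node
dst = Vt[:d, :].T                  # "target" embedding of each node

# Dot products between embeddings approximate similarity in the network:
approx = src @ dst.T               # best rank-d approximation of A
err = np.linalg.norm(A - approx) / np.linalg.norm(A)
print(f"relative reconstruction error with d={d}: {err:.3f}")
```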