2021
DOI: 10.48550/arxiv.2112.12251
Preprint

ML4CO: Is GCNN All You Need? Graph Convolutional Neural Networks Produce Strong Baselines For Combinatorial Optimization Problems, If Tuned and Trained Properly, on Appropriate Data

Abstract: The 2021 NeurIPS Machine Learning for Combinatorial Optimization (ML4CO) competition was designed with the goal of improving state-of-the-art combinatorial optimization solvers by replacing key heuristic components with machine learning models. The competition's main scientific question was the following: is machine learning a viable option for improving traditional combinatorial optimization solvers on specific problem distributions, when historical data is available? This was motivated by the fact that in ma…
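For context, the GCNN baselines named in the title follow the bipartite constraint-variable encoding of MILP instances popularized by Gasse et al. (2019), in which a graph convolution passes messages between constraint nodes and variable nodes. The sketch below shows one such half-convolution in PyTorch; the layer sizes, feature names, and sum aggregation are illustrative assumptions, not the competition entries' actual code.

```python
# Minimal sketch of a bipartite half-convolution of the kind used as a
# GCNN baseline for learning to branch. All dimensions and names are
# illustrative assumptions, not the competition code.
import torch
import torch.nn as nn

class BipartiteConv(nn.Module):
    """One half-convolution: messages flow from constraints to variables."""

    def __init__(self, cons_dim, var_dim, edge_dim, out_dim):
        super().__init__()
        self.msg = nn.Sequential(
            nn.Linear(cons_dim + edge_dim + var_dim, out_dim), nn.ReLU()
        )
        self.update = nn.Sequential(
            nn.Linear(var_dim + out_dim, out_dim), nn.ReLU()
        )

    def forward(self, cons_feat, var_feat, edge_index, edge_feat):
        # edge_index: (2, E), row 0 = constraint ids, row 1 = variable ids
        c, v = edge_index
        messages = self.msg(
            torch.cat([cons_feat[c], edge_feat, var_feat[v]], dim=-1)
        )
        # Sum incoming messages at each variable node.
        agg = torch.zeros(var_feat.size(0), messages.size(-1),
                          device=var_feat.device)
        agg.index_add_(0, v, messages)
        return self.update(torch.cat([var_feat, agg], dim=-1))
```

In the full baseline architecture a symmetric pass updates the constraint embeddings first, and a final linear head maps the variable embeddings to per-variable scores (e.g., branching priorities).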

Cited by 1 publication (1 citation statement)
References 7 publications
“…Some other factors that complicate a stringent comparison are differences in train-test splits [61], and differences in the loss metrics used (a mean absolute error loss was found to yield lower overall test errors [23] than the mean squared error loss used in previous models [22], although the mean absolute error is typically given as a benchmark reference). It has to be noted that hyperparameters are generally very important and are often not exhaustively optimized for GNNs, which can cause differences in performance apart from the model architecture [125-127].…”
Citation type: mentioning (confidence: 99%)
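The citing text's point about loss metrics is easy to make concrete: swapping mean absolute error (L1) for mean squared error (L2) is a one-line change, yet it acts as a hyperparameter in its own right. The snippet below is a minimal PyTorch sketch; the linear model is a placeholder standing in for a GNN regressor, and nothing in it comes from the cited works.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 1)  # placeholder for a GNN regressor
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y, loss_name="mae"):
    # Swapping the training loss (L1 vs. L2) while holding the
    # architecture fixed isolates the effect described above: MAE
    # penalizes large residuals less severely than MSE.
    loss_fn = nn.L1Loss() if loss_name == "mae" else nn.MSELoss()
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

x, y = torch.randn(32, 8), torch.randn(32, 1)
print(train_step(x, y, "mae"), train_step(x, y, "mse"))
```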